Nov 28 06:21:16 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 06:21:16 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 06:21:16 crc restorecon[4680]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc 
restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc 
restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 
06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 
crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 
06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc 
restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:16 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:16 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 28 06:21:17 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc 
restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 06:21:17 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 28 06:21:17 crc kubenswrapper[4955]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 06:21:17 crc kubenswrapper[4955]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 06:21:17 crc kubenswrapper[4955]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 06:21:17 crc kubenswrapper[4955]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 28 06:21:17 crc kubenswrapper[4955]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 28 06:21:17 crc kubenswrapper[4955]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.542636 4955 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546406 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546539 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546609 4955 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546679 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546740 4955 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546800 4955 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546858 4955 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546925 4955 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.546985 4955 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:21:17 crc kubenswrapper[4955]: 
W1128 06:21:17.547043 4955 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547108 4955 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547168 4955 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547226 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547292 4955 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547359 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547419 4955 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547479 4955 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547570 4955 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547632 4955 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547692 4955 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547754 4955 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547824 4955 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547886 4955 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.547945 4955 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548003 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548061 4955 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548120 4955 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548185 4955 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548250 4955 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548311 4955 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548369 4955 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548437 4955 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548497 4955 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548593 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548656 4955 feature_gate.go:330] 
unrecognized feature gate: BuildCSIVolumes Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548722 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548782 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548841 4955 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548902 4955 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.548963 4955 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549034 4955 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549097 4955 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549163 4955 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549223 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549282 4955 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549342 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549400 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549459 4955 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549548 4955 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 
06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549633 4955 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549696 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549755 4955 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549813 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549877 4955 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.549956 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550023 4955 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550090 4955 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550152 4955 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550217 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550278 4955 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550337 4955 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550396 4955 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550461 4955 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550597 4955 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550626 4955 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550631 4955 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550636 4955 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550641 4955 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550645 4955 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.550649 4955 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 
06:21:17.550654 4955 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550789 4955 flags.go:64] FLAG: --address="0.0.0.0" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550802 4955 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550826 4955 flags.go:64] FLAG: --anonymous-auth="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550833 4955 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550840 4955 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550845 4955 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550852 4955 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550864 4955 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550869 4955 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550873 4955 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550878 4955 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550883 4955 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550887 4955 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550892 4955 flags.go:64] FLAG: --cgroup-root="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550897 4955 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 
06:21:17.550901 4955 flags.go:64] FLAG: --client-ca-file="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550905 4955 flags.go:64] FLAG: --cloud-config="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550909 4955 flags.go:64] FLAG: --cloud-provider="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550914 4955 flags.go:64] FLAG: --cluster-dns="[]" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550921 4955 flags.go:64] FLAG: --cluster-domain="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550925 4955 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550931 4955 flags.go:64] FLAG: --config-dir="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550935 4955 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550941 4955 flags.go:64] FLAG: --container-log-max-files="5" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550948 4955 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550953 4955 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550957 4955 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550962 4955 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550967 4955 flags.go:64] FLAG: --contention-profiling="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550971 4955 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550975 4955 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550980 4955 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 
06:21:17.550984 4955 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550990 4955 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550995 4955 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.550999 4955 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551003 4955 flags.go:64] FLAG: --enable-load-reader="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551007 4955 flags.go:64] FLAG: --enable-server="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551012 4955 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551019 4955 flags.go:64] FLAG: --event-burst="100" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551023 4955 flags.go:64] FLAG: --event-qps="50" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551028 4955 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551032 4955 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551037 4955 flags.go:64] FLAG: --eviction-hard="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551043 4955 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551048 4955 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551052 4955 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551057 4955 flags.go:64] FLAG: --eviction-soft="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551061 4955 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 28 06:21:17 crc 
kubenswrapper[4955]: I1128 06:21:17.551066 4955 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551072 4955 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551077 4955 flags.go:64] FLAG: --experimental-mounter-path="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551081 4955 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551085 4955 flags.go:64] FLAG: --fail-swap-on="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551089 4955 flags.go:64] FLAG: --feature-gates="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551095 4955 flags.go:64] FLAG: --file-check-frequency="20s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551101 4955 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551106 4955 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551111 4955 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551116 4955 flags.go:64] FLAG: --healthz-port="10248" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551121 4955 flags.go:64] FLAG: --help="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551125 4955 flags.go:64] FLAG: --hostname-override="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551130 4955 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551134 4955 flags.go:64] FLAG: --http-check-frequency="20s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551138 4955 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551142 4955 flags.go:64] FLAG: --image-credential-provider-config="" Nov 28 06:21:17 crc 
kubenswrapper[4955]: I1128 06:21:17.551146 4955 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551151 4955 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551155 4955 flags.go:64] FLAG: --image-service-endpoint="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551160 4955 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551165 4955 flags.go:64] FLAG: --kube-api-burst="100" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551170 4955 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551176 4955 flags.go:64] FLAG: --kube-api-qps="50" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551181 4955 flags.go:64] FLAG: --kube-reserved="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551186 4955 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551191 4955 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551197 4955 flags.go:64] FLAG: --kubelet-cgroups="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551201 4955 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551206 4955 flags.go:64] FLAG: --lock-file="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551211 4955 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551216 4955 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551220 4955 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551228 4955 flags.go:64] FLAG: --log-json-split-stream="false" Nov 28 06:21:17 crc 
kubenswrapper[4955]: I1128 06:21:17.551232 4955 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551236 4955 flags.go:64] FLAG: --log-text-split-stream="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551240 4955 flags.go:64] FLAG: --logging-format="text" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551244 4955 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551249 4955 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551255 4955 flags.go:64] FLAG: --manifest-url="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551259 4955 flags.go:64] FLAG: --manifest-url-header="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551266 4955 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551270 4955 flags.go:64] FLAG: --max-open-files="1000000" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551276 4955 flags.go:64] FLAG: --max-pods="110" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551280 4955 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551284 4955 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551288 4955 flags.go:64] FLAG: --memory-manager-policy="None" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551292 4955 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551297 4955 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551301 4955 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551305 4955 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551319 4955 flags.go:64] FLAG: --node-status-max-images="50" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551323 4955 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551328 4955 flags.go:64] FLAG: --oom-score-adj="-999" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551332 4955 flags.go:64] FLAG: --pod-cidr="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551336 4955 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551345 4955 flags.go:64] FLAG: --pod-manifest-path="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551349 4955 flags.go:64] FLAG: --pod-max-pids="-1" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551353 4955 flags.go:64] FLAG: --pods-per-core="0" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551357 4955 flags.go:64] FLAG: --port="10250" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551361 4955 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551366 4955 flags.go:64] FLAG: --provider-id="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551370 4955 flags.go:64] FLAG: --qos-reserved="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551374 4955 flags.go:64] FLAG: --read-only-port="10255" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551378 4955 flags.go:64] FLAG: --register-node="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551384 4955 flags.go:64] FLAG: --register-schedulable="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551389 4955 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551397 4955 flags.go:64] FLAG: --registry-burst="10" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551401 4955 flags.go:64] FLAG: --registry-qps="5" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551406 4955 flags.go:64] FLAG: --reserved-cpus="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551410 4955 flags.go:64] FLAG: --reserved-memory="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551417 4955 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551422 4955 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551427 4955 flags.go:64] FLAG: --rotate-certificates="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551431 4955 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551435 4955 flags.go:64] FLAG: --runonce="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551440 4955 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551444 4955 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551449 4955 flags.go:64] FLAG: --seccomp-default="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551453 4955 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551458 4955 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551462 4955 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551467 4955 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551471 
4955 flags.go:64] FLAG: --storage-driver-password="root" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551476 4955 flags.go:64] FLAG: --storage-driver-secure="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551480 4955 flags.go:64] FLAG: --storage-driver-table="stats" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551484 4955 flags.go:64] FLAG: --storage-driver-user="root" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551488 4955 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551492 4955 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551497 4955 flags.go:64] FLAG: --system-cgroups="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551519 4955 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551535 4955 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551541 4955 flags.go:64] FLAG: --tls-cert-file="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551546 4955 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551553 4955 flags.go:64] FLAG: --tls-min-version="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551559 4955 flags.go:64] FLAG: --tls-private-key-file="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551564 4955 flags.go:64] FLAG: --topology-manager-policy="none" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551569 4955 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551575 4955 flags.go:64] FLAG: --topology-manager-scope="container" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551581 4955 flags.go:64] FLAG: --v="2" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551588 4955 
flags.go:64] FLAG: --version="false" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551594 4955 flags.go:64] FLAG: --vmodule="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551600 4955 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.551605 4955 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551752 4955 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551757 4955 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551762 4955 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551766 4955 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551771 4955 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551776 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551780 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551784 4955 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551787 4955 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551791 4955 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551795 4955 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551800 4955 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551803 4955 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551806 4955 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551810 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551814 4955 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551819 4955 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551823 4955 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551827 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551831 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551835 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551839 4955 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551843 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551847 4955 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551850 4955 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 
06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551854 4955 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551857 4955 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551861 4955 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551864 4955 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551867 4955 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551871 4955 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551875 4955 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551879 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551882 4955 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551886 4955 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551890 4955 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551893 4955 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551897 4955 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551900 4955 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551904 4955 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551907 4955 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551911 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551914 4955 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551918 4955 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551921 4955 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551925 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551928 4955 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551932 4955 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551935 4955 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551938 4955 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551942 4955 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551945 4955 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551949 4955 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551953 4955 feature_gate.go:353] Setting GA feature gate 
CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551957 4955 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551961 4955 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551965 4955 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551969 4955 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551972 4955 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551976 4955 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551980 4955 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551983 4955 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551988 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551991 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551995 4955 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.551999 4955 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.552002 4955 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.552005 4955 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:21:17 crc 
kubenswrapper[4955]: W1128 06:21:17.552009 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.552012 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.552017 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.552023 4955 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.558799 4955 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.558822 4955 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558890 4955 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558898 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558904 4955 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558909 4955 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558913 4955 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558917 4955 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558921 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558926 4955 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558931 4955 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558935 4955 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558940 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558944 4955 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558949 4955 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558953 4955 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558958 4955 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558962 4955 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558967 4955 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558971 4955 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558975 4955 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558979 4955 feature_gate.go:330] unrecognized feature gate: 
RouteAdvertisements Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558984 4955 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558988 4955 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558992 4955 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.558997 4955 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559001 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559006 4955 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559010 4955 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559014 4955 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559019 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559024 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559028 4955 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559032 4955 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559037 4955 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559041 4955 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:21:17 crc 
kubenswrapper[4955]: W1128 06:21:17.559052 4955 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559057 4955 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559062 4955 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559067 4955 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559071 4955 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559076 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559080 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559085 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559089 4955 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559094 4955 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559098 4955 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559102 4955 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559107 4955 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559111 4955 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 
06:21:17.559115 4955 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559120 4955 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559124 4955 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559129 4955 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559135 4955 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559144 4955 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559151 4955 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559159 4955 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559165 4955 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559170 4955 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559175 4955 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559180 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559186 4955 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559193 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559198 4955 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559203 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559208 4955 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559212 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559216 4955 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559220 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559226 4955 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559232 4955 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559238 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.559247 4955 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559387 4955 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559397 4955 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559403 4955 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559408 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559413 4955 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559417 4955 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559422 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559427 4955 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559432 4955 feature_gate.go:330] unrecognized 
feature gate: InsightsConfigAPI Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559437 4955 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559441 4955 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559446 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559450 4955 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559455 4955 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559460 4955 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559464 4955 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559469 4955 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559473 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559477 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559482 4955 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559486 4955 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559490 4955 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559495 4955 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 28 06:21:17 crc 
kubenswrapper[4955]: W1128 06:21:17.559499 4955 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559520 4955 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559524 4955 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559528 4955 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559532 4955 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559537 4955 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559541 4955 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559545 4955 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559549 4955 feature_gate.go:330] unrecognized feature gate: Example Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559555 4955 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559562 4955 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559568 4955 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559573 4955 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559578 4955 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559582 4955 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559585 4955 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559589 4955 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559593 4955 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559596 4955 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559599 4955 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559603 4955 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559608 4955 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559613 4955 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559617 4955 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559620 4955 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559625 4955 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559628 4955 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559632 4955 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559635 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559639 4955 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559643 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559646 4955 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559650 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559655 4955 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559659 4955 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559663 4955 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559668 4955 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559672 4955 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559676 4955 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559679 4955 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559683 4955 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559687 4955 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559691 4955 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559695 4955 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559698 4955 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559701 4955 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559705 4955 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.559709 4955 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 28 06:21:17 crc 
kubenswrapper[4955]: I1128 06:21:17.559716 4955 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.559876 4955 server.go:940] "Client rotation is on, will bootstrap in background" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.562702 4955 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.562770 4955 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.563214 4955 server.go:997] "Starting client certificate rotation" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.563234 4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.563500 4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-14 11:54:47.824202082 +0000 UTC Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.563665 4955 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 389h33m30.260540793s for next certificate rotation Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.567931 4955 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 
06:21:17.569366 4955 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.578474 4955 log.go:25] "Validated CRI v1 runtime API" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.600385 4955 log.go:25] "Validated CRI v1 image API" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.601761 4955 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.604154 4955 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-06-17-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.604203 4955 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.627911 4955 manager.go:217] Machine: {Timestamp:2025-11-28 06:21:17.625763633 +0000 UTC m=+0.215019273 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:3d14fd8f-8a80-4dfe-b670-badbf9b65f7b BootID:c8724b23-f7a1-4f7c-bb6a-5c302bc97241 Filesystems:[{Device:/dev/shm 
DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:b8:78:e8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:b8:78:e8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e5:3d:5d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a0:37:14 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1e:11:aa Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d2:0e:b1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:68:95:8c:16:f5 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:aa:3e:35:47:a1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 
BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} 
{Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.628340 4955 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.628708 4955 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.629408 4955 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.629794 4955 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.629855 4955 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.630195 4955 topology_manager.go:138] "Creating topology manager with none policy" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.630216 4955 container_manager_linux.go:303] "Creating device plugin manager" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.630550 4955 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.630603 4955 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.630889 4955 state_mem.go:36] "Initialized new in-memory state store" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.631487 4955 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.632734 4955 kubelet.go:418] "Attempting to sync node with API server" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.632770 4955 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.632828 4955 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.632850 4955 kubelet.go:324] "Adding apiserver pod source" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.632870 4955 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.635195 4955 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.635357 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.635453 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.635744 4955 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.635703 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.635870 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.636927 4955 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637645 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637686 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637701 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637714 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637737 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 28 06:21:17 
crc kubenswrapper[4955]: I1128 06:21:17.637788 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637807 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637837 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637853 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637867 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637897 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.637911 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.638206 4955 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.638902 4955 server.go:1280] "Started kubelet" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.641595 4955 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.641622 4955 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.642726 4955 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.643472 4955 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 28 06:21:17 crc systemd[1]: Started Kubernetes Kubelet. Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.645690 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.645824 4955 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.645854 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:35:12.605076916 +0000 UTC Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.645885 4955 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 612h13m54.95919331s for next certificate rotation Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.645944 4955 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.645950 4955 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.646592 4955 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.646791 4955 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.647535 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="200ms" Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.648029 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.648216 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.650842 4955 factory.go:55] Registering systemd factory Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.649060 4955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c17653f05987c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:21:17.638858876 +0000 UTC m=+0.228114486,LastTimestamp:2025-11-28 06:21:17.638858876 +0000 UTC m=+0.228114486,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.650886 4955 factory.go:221] Registration of the systemd container factory successfully Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.651476 4955 factory.go:153] Registering CRI-O factory Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.651527 4955 factory.go:221] Registration of the crio container factory successfully Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.651619 4955 
factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.651642 4955 factory.go:103] Registering Raw factory Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.651660 4955 manager.go:1196] Started watching for new ooms in manager Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.652345 4955 manager.go:319] Starting recovery of all containers Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.653761 4955 server.go:460] "Adding debug handlers to kubelet server" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661269 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661413 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661436 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661456 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 28 
06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661469 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661484 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661559 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661575 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661589 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661608 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 
06:21:17.661627 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661640 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661660 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661682 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661717 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661738 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.661786 4955 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.665798 4955 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.665909 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.665961 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.665986 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666009 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666044 4955 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666080 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666116 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666261 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666288 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666335 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666376 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666400 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666428 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.666487 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.667964 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668049 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668078 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668112 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668140 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668165 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668197 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668221 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668290 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668345 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668369 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668401 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668427 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668461 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668488 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.668555 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669075 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669101 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669132 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669158 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669187 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669261 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669299 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669326 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669365 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669414 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669441 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669466 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669497 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669549 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669580 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669604 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669629 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 28 06:21:17 
crc kubenswrapper[4955]: I1128 06:21:17.669658 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669712 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669744 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669764 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669786 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669817 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669839 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.669871 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671291 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671325 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671357 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671395 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671417 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671444 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671465 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671495 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671543 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671568 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671787 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671833 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671880 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671911 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671941 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671972 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.671996 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672026 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672047 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672071 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672098 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672123 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672151 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672174 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672204 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672233 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672256 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672293 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672321 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672400 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672430 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672456 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672502 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672584 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672614 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672645 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672679 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672712 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672756 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672789 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672812 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672840 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672861 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672887 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672906 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672932 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672964 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.672983 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673006 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673029 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673048 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673075 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673097 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673121 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673147 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673167 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673202 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673222 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673240 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673276 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673300 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673325 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673347 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673367 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673399 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673419 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673447 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673468 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673491 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673544 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673565 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673590 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673611 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673731 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673765 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673860 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673896 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673918 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673938 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.673966 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674073 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674095 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674125 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674159 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674181 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674213 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674241 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674275 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674297 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674319 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674354 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674375 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674401 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674422 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674445 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674471 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674494 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674562 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674602 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674623 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674650 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674673 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674704 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674735 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674758 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674787 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674810 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674836 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674857 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674882 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674857 4955 manager.go:324] Recovery completed
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674909 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674928 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674948 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674973 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.674993 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual
state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675020 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675040 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675063 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675088 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675108 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675134 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675155 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675178 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675205 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675228 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675255 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675274 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" 
seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675297 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675344 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675371 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675406 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675427 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.675544 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 
06:21:17.676192 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.676225 4955 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.676245 4955 reconstruct.go:97] "Volume reconstruction finished" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.676258 4955 reconciler.go:26] "Reconciler: start to sync state" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.693015 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.695122 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.695180 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.695201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.696406 4955 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.696441 4955 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.696471 4955 state_mem.go:36] "Initialized new in-memory state store" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.699477 4955 kubelet_network_linux.go:50] "Initialized 
iptables rules." protocol="IPv4" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.702854 4955 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.702991 4955 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.703030 4955 kubelet.go:2335] "Starting kubelet main sync loop" Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.703103 4955 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 06:21:17 crc kubenswrapper[4955]: W1128 06:21:17.707167 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.707329 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.711121 4955 policy_none.go:49] "None policy: Start" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.712210 4955 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.712256 4955 state_mem.go:35] "Initializing new in-memory state store" Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.746869 4955 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.770626 4955 
manager.go:334] "Starting Device Plugin manager" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.770711 4955 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.770731 4955 server.go:79] "Starting device plugin registration server" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.771427 4955 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.771448 4955 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.771648 4955 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.771776 4955 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.771796 4955 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.779033 4955 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.803649 4955 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.803794 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.805285 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.805349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.805370 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.805671 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.806086 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.806152 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.807069 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.807136 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.807157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.807269 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.807322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.807337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 
06:21:17.807365 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.807580 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.807642 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.808914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.808964 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.808985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.809151 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.809336 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.809384 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810320 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810355 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810373 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810641 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810675 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810698 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810715 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810677 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810759 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.810980 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc 
kubenswrapper[4955]: I1128 06:21:17.811275 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.811345 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.812085 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.812111 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.812124 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.812367 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.812406 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.812693 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.812732 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.812752 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.813156 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.813180 4955 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.813195 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.848195 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="400ms" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.872209 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.873923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.873957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.873972 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.874002 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:21:17 crc kubenswrapper[4955]: E1128 06:21:17.874570 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.878785 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 
crc kubenswrapper[4955]: I1128 06:21:17.878851 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.878894 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.878983 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879046 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879101 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:17 crc 
kubenswrapper[4955]: I1128 06:21:17.879158 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879191 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879216 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879235 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879311 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 
06:21:17.879397 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879443 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879539 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.879648 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981067 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981193 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981228 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981262 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981292 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981326 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981365 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981394 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981425 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981454 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981451 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981594 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981483 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981672 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981712 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981721 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981758 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981799 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981831 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981843 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981879 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981892 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981888 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981959 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981916 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.982002 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981991 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.982065 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.982127 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:17 crc kubenswrapper[4955]: I1128 06:21:17.981966 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.075206 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.077065 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.077127 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.077148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.077187 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:21:18 crc kubenswrapper[4955]: E1128 06:21:18.077867 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.136830 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.147411 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.170187 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.190709 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 06:21:18 crc kubenswrapper[4955]: W1128 06:21:18.192225 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-cbf27ca29eb98c5819993891eb33cdf6c09537997d6ca713b563f7f48d2c973c WatchSource:0}: Error finding container cbf27ca29eb98c5819993891eb33cdf6c09537997d6ca713b563f7f48d2c973c: Status 404 returned error can't find the container with id cbf27ca29eb98c5819993891eb33cdf6c09537997d6ca713b563f7f48d2c973c Nov 28 06:21:18 crc kubenswrapper[4955]: W1128 06:21:18.195164 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f6c72b10bcb2562ec0b50a03a37f11c2f8eae1afc4df5928e29681f9101735b0 WatchSource:0}: Error finding container f6c72b10bcb2562ec0b50a03a37f11c2f8eae1afc4df5928e29681f9101735b0: Status 404 returned error can't find the container with id f6c72b10bcb2562ec0b50a03a37f11c2f8eae1afc4df5928e29681f9101735b0 Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.196966 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:18 crc kubenswrapper[4955]: W1128 06:21:18.202167 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-27df22e417ba419e6e7320ac36a9c2b5744a6353cf107f161630539c60982acf WatchSource:0}: Error finding container 27df22e417ba419e6e7320ac36a9c2b5744a6353cf107f161630539c60982acf: Status 404 returned error can't find the container with id 27df22e417ba419e6e7320ac36a9c2b5744a6353cf107f161630539c60982acf Nov 28 06:21:18 crc kubenswrapper[4955]: E1128 06:21:18.249887 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="800ms" Nov 28 06:21:18 crc kubenswrapper[4955]: W1128 06:21:18.271453 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-573c7c59358b68941d2ffe64ca87780d3a00a61c7ccb41a175afff4c1d433e5b WatchSource:0}: Error finding container 573c7c59358b68941d2ffe64ca87780d3a00a61c7ccb41a175afff4c1d433e5b: Status 404 returned error can't find the container with id 573c7c59358b68941d2ffe64ca87780d3a00a61c7ccb41a175afff4c1d433e5b Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.479083 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.480857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.480913 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 
06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.480928 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.480959 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:21:18 crc kubenswrapper[4955]: E1128 06:21:18.481733 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Nov 28 06:21:18 crc kubenswrapper[4955]: W1128 06:21:18.539263 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:18 crc kubenswrapper[4955]: E1128 06:21:18.539394 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.644266 4955 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.709618 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.709760 4955 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"573c7c59358b68941d2ffe64ca87780d3a00a61c7ccb41a175afff4c1d433e5b"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.709886 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.711489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.711559 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.711576 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.712166 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6e898f271d4ba528d08d11186dd8c7018c18503cd76694c388d0696e778d7b5a"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.712231 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e4a467cbfc575b1d7c23f142b0cdd02d2a546d4bf65cfdfca22ddb6a2cc24240"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.712401 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.713380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.713405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.713414 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.714018 4955 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f9614be42e2b08663f23b609bf7a522553ddb510ba0f733232a3d4b3030068f1" exitCode=0 Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.714053 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f9614be42e2b08663f23b609bf7a522553ddb510ba0f733232a3d4b3030068f1"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.714074 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"27df22e417ba419e6e7320ac36a9c2b5744a6353cf107f161630539c60982acf"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.716479 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.716588 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f6c72b10bcb2562ec0b50a03a37f11c2f8eae1afc4df5928e29681f9101735b0"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.717891 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.719699 4955 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.719832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.719911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.721389 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60"} Nov 28 06:21:18 crc kubenswrapper[4955]: I1128 06:21:18.721528 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cbf27ca29eb98c5819993891eb33cdf6c09537997d6ca713b563f7f48d2c973c"} Nov 28 06:21:18 crc kubenswrapper[4955]: W1128 06:21:18.848701 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:18 crc kubenswrapper[4955]: E1128 06:21:18.848796 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:21:18 crc kubenswrapper[4955]: W1128 06:21:18.977176 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:18 crc kubenswrapper[4955]: E1128 06:21:18.977295 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:21:19 crc kubenswrapper[4955]: E1128 06:21:19.051339 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="1.6s" Nov 28 06:21:19 crc kubenswrapper[4955]: W1128 06:21:19.137644 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:19 crc kubenswrapper[4955]: E1128 06:21:19.137812 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.97:6443: connect: connection refused" logger="UnhandledError" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.282046 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.285312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.285381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.285401 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.285448 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:21:19 crc kubenswrapper[4955]: E1128 06:21:19.286231 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.97:6443: connect: connection refused" node="crc" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.644058 4955 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.97:6443: connect: connection refused Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.728013 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc" exitCode=0 Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.728168 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc"} Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.728549 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.730424 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 
06:21:19.730495 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.730546 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.731686 4955 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6e898f271d4ba528d08d11186dd8c7018c18503cd76694c388d0696e778d7b5a" exitCode=0 Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.731740 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6e898f271d4ba528d08d11186dd8c7018c18503cd76694c388d0696e778d7b5a"} Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.731759 4955 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b93adf18d940000ab5996b12834b1d2dda822b1a3289eaf8979918480cfde9fe" exitCode=0 Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.731811 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b93adf18d940000ab5996b12834b1d2dda822b1a3289eaf8979918480cfde9fe"} Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.731996 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.733307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.733358 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.733376 4955 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.734213 4955 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c" exitCode=0 Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.734275 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c"} Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.734414 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.735490 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.735554 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.735574 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.738400 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.739437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.739465 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.739476 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.741073 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0"} Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.741130 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb"} Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.741152 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec"} Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.741162 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.741208 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.743775 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.743802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.743819 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.743826 4955 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.743865 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:19 crc kubenswrapper[4955]: I1128 06:21:19.743882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.186124 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.746562 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.746634 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.746700 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.746717 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.749434 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9541170c0c673766a4d1419facc85b47ab4b5f4ebb6cbc52a6e1698d91f73c0b" exitCode=0 Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.749540 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9541170c0c673766a4d1419facc85b47ab4b5f4ebb6cbc52a6e1698d91f73c0b"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.749693 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.750914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.750959 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.750975 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.753767 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8ec51985d66d26af8cb2598cf3681efb226637cf70d72ea7091de369dac629fc"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.753879 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.755595 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.755631 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 
06:21:20.755651 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.760205 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.761020 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.761075 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.761097 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb"} Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.761199 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.762308 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.762351 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.762370 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 
06:21:20.763324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.763353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.763369 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.886921 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.888207 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.888262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.888277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:20 crc kubenswrapper[4955]: I1128 06:21:20.888304 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.768803 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f"} Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.768845 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.770615 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 
06:21:21.770686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.770712 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.774859 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.775066 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8f930099672e66cc3321aca1b7b0989695dd62888ab66b55d73af21ad34c6722"} Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.775115 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5b86184d99c735424f9e1fc9f6a1a66ed28c12838bc79341ef44540d6623f914"} Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.775144 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e002261754f61e563f9378735d73ac57fad259e98740f22cd7e4e64aa2ec6f96"} Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.776049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.776110 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:21 crc kubenswrapper[4955]: I1128 06:21:21.776129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.782855 4955 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.783054 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3a815d4721fed47507f8cc704bd5c633bb195ccd9d29f67bdc6411b7701b2c9b"} Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.783132 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9705455b2095a9f0e500be50f24418ed56c8f87f007e96c3d3202361db498734"} Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.783134 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.783172 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.784421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.784484 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.784547 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.784428 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.784596 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:22 crc kubenswrapper[4955]: I1128 06:21:22.784613 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.186618 4955 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.186711 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.785575 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.785642 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.786839 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.786868 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.786879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.786877 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.787084 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:21:23 crc kubenswrapper[4955]: I1128 06:21:23.787130 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.819722 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.819944 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.821631 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.821677 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.821695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.883340 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.883600 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.885109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.885171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.885191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:24 crc kubenswrapper[4955]: 
I1128 06:21:24.919342 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.919600 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.921120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.921222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:24 crc kubenswrapper[4955]: I1128 06:21:24.921247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.003040 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.420280 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.728730 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.728977 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.730805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.730863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.730882 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.791419 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.791550 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.792779 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.792832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.792852 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.793169 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.793208 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.793225 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.909260 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:25 crc kubenswrapper[4955]: I1128 06:21:25.917708 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.035256 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.035598 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.037213 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.037277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.037299 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.794321 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.795756 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.795813 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:26 crc kubenswrapper[4955]: I1128 06:21:26.795833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:27 crc kubenswrapper[4955]: E1128 06:21:27.779174 4955 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 06:21:27 crc kubenswrapper[4955]: I1128 06:21:27.797422 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:27 crc kubenswrapper[4955]: I1128 06:21:27.798713 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:27 crc kubenswrapper[4955]: I1128 
06:21:27.798763 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:27 crc kubenswrapper[4955]: I1128 06:21:27.798782 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:30 crc kubenswrapper[4955]: W1128 06:21:30.376246 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 06:21:30 crc kubenswrapper[4955]: I1128 06:21:30.376343 4955 trace.go:236] Trace[1507006041]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:21:20.375) (total time: 10001ms): Nov 28 06:21:30 crc kubenswrapper[4955]: Trace[1507006041]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:21:30.376) Nov 28 06:21:30 crc kubenswrapper[4955]: Trace[1507006041]: [10.001152812s] [10.001152812s] END Nov 28 06:21:30 crc kubenswrapper[4955]: E1128 06:21:30.376365 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 06:21:30 crc kubenswrapper[4955]: I1128 06:21:30.645085 4955 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 28 06:21:30 crc kubenswrapper[4955]: E1128 06:21:30.652451 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Nov 28 06:21:30 crc kubenswrapper[4955]: W1128 06:21:30.887626 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 06:21:30 crc kubenswrapper[4955]: I1128 06:21:30.887726 4955 trace.go:236] Trace[1420674827]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:21:20.886) (total time: 10000ms): Nov 28 06:21:30 crc kubenswrapper[4955]: Trace[1420674827]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (06:21:30.887) Nov 28 06:21:30 crc kubenswrapper[4955]: Trace[1420674827]: [10.000838746s] [10.000838746s] END Nov 28 06:21:30 crc kubenswrapper[4955]: E1128 06:21:30.887750 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 06:21:30 crc kubenswrapper[4955]: E1128 06:21:30.889731 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 28 06:21:31 crc kubenswrapper[4955]: W1128 06:21:31.067065 4955 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 28 06:21:31 crc kubenswrapper[4955]: I1128 06:21:31.067237 4955 trace.go:236] Trace[564643503]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:21:21.065) (total time: 10001ms): Nov 28 06:21:31 crc kubenswrapper[4955]: Trace[564643503]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:21:31.067) Nov 28 06:21:31 crc kubenswrapper[4955]: Trace[564643503]: [10.001362965s] [10.001362965s] END Nov 28 06:21:31 crc kubenswrapper[4955]: E1128 06:21:31.067294 4955 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 28 06:21:31 crc kubenswrapper[4955]: I1128 06:21:31.079633 4955 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 28 06:21:31 crc kubenswrapper[4955]: I1128 06:21:31.079680 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 28 06:21:31 crc kubenswrapper[4955]: I1128 06:21:31.161418 4955 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver 
namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 06:21:31 crc kubenswrapper[4955]: I1128 06:21:31.161479 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 06:21:31 crc kubenswrapper[4955]: I1128 06:21:31.171183 4955 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 06:21:31 crc kubenswrapper[4955]: I1128 06:21:31.171291 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 06:21:33 crc kubenswrapper[4955]: I1128 06:21:33.186754 4955 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 06:21:33 crc kubenswrapper[4955]: I1128 06:21:33.186841 4955 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.090139 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.091773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.091799 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.091810 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.091834 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:21:34 crc kubenswrapper[4955]: E1128 06:21:34.097027 4955 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.826852 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.827062 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.828335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 
06:21:34.828387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:34 crc kubenswrapper[4955]: I1128 06:21:34.828405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.031845 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.032086 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.033257 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.033300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.033317 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.050267 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.200185 4955 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.825343 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.826420 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.826469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 
06:21:35 crc kubenswrapper[4955]: I1128 06:21:35.826486 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.041203 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.042023 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.044089 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.044154 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.044167 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.047780 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.166633 4955 trace.go:236] Trace[1798588438]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 06:21:21.432) (total time: 14734ms): Nov 28 06:21:36 crc kubenswrapper[4955]: Trace[1798588438]: ---"Objects listed" error: 14734ms (06:21:36.166) Nov 28 06:21:36 crc kubenswrapper[4955]: Trace[1798588438]: [14.734462203s] [14.734462203s] END Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.166668 4955 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.167132 4955 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 
06:21:36.400858 4955 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.583169 4955 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39444->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.583199 4955 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39458->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.583227 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39444->192.168.126.11:17697: read: connection reset by peer" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.583283 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39458->192.168.126.11:17697: read: connection reset by peer" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.644871 4955 apiserver.go:52] "Watching apiserver" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.648223 4955 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 
06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.648540 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.648896 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.648992 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.649058 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.649074 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.649318 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.649427 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.649440 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.649540 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.649580 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.661215 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.682094 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.682447 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.682908 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.683277 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.683343 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.683540 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.683556 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.683875 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.712976 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.724604 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.733768 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.742655 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.748242 4955 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.754434 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.766309 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770206 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770246 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770269 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770292 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770308 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770323 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770338 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770354 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770373 4955 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770393 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770413 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770438 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770458 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770473 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" 
(UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770525 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770556 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770549 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770585 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770613 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770640 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770662 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770682 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770700 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770722 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770737 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770756 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770774 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770795 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc 
kubenswrapper[4955]: I1128 06:21:36.770810 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770827 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770843 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770863 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770879 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770897 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770917 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770935 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770953 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770970 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770985 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771001 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771018 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771034 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771051 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771066 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771092 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771110 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771126 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771141 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771198 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771217 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc 
kubenswrapper[4955]: I1128 06:21:36.771234 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771250 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771265 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771282 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771301 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771317 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771333 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771348 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771366 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771381 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771397 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771412 4955 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771429 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771444 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771460 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771477 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771522 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771551 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771573 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771596 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771623 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771648 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771667 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771683 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771701 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771718 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771733 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771748 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771763 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771779 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771795 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771810 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772004 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772022 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772037 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772053 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772071 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772086 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772101 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 
06:21:36.772118 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772136 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772152 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772167 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772183 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772199 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772217 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772232 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772246 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772264 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772279 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772294 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772308 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772324 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772345 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772361 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772377 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772395 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772411 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772427 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772442 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772458 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772474 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772492 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772522 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772540 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772557 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772573 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772589 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772606 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772623 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772640 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772656 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770827 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772763 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770878 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770877 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772993 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.770973 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771017 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771075 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771071 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771094 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771211 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771273 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771396 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771442 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.771559 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772320 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772470 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772519 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.773005 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.773185 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.773577 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.773880 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.774082 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.774413 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.776917 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.777433 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.777491 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.777610 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.777639 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.777784 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.777885 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.777994 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.778023 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.778045 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.778072 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.778323 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.778367 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.778379 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.778667 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.778682 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.779291 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.779386 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.779923 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780114 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780113 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780145 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780329 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780385 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780490 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780608 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780657 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780639 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780891 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780915 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780932 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.780975 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.781151 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.781343 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.781430 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.781622 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.781747 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.782027 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.782304 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.782528 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.782884 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783033 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783303 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.772674 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783518 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783533 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783540 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783562 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783583 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783607 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783624 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783640 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783659 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783678 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783696 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783713 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783802 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 
28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783815 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783822 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783863 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783881 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783902 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783920 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783967 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783986 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784010 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784032 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784049 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784066 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784084 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784114 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784132 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784150 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784167 4955 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784184 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784201 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784216 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784234 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784251 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784265 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784283 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784298 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784313 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784329 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784343 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784358 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784375 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784392 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784408 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784424 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784440 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784456 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784471 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784487 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784525 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784542 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784562 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784579 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784598 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784616 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784635 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784651 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784667 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784684 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784702 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784720 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784737 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" 
(UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784755 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784770 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784786 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784802 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784817 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 
06:21:36.784832 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784847 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784862 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784878 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784896 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784912 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784929 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784965 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784989 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785020 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785040 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785059 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785077 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785176 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785196 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785213 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785231 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785255 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785273 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785291 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785309 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785370 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785383 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785395 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785405 4955 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785416 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785428 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 
06:21:36.785438 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785450 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785463 4955 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785474 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785485 4955 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785497 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785539 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc 
kubenswrapper[4955]: I1128 06:21:36.785554 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785566 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785578 4955 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785589 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785600 4955 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785612 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785623 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785634 4955 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785646 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785657 4955 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785667 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785677 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785687 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785696 4955 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785707 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785716 4955 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785724 4955 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785733 4955 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785741 4955 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785750 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785760 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785770 4955 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc 
kubenswrapper[4955]: I1128 06:21:36.785779 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785788 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785797 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785807 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785816 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785825 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785834 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785843 4955 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785853 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785863 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785872 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785882 4955 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785891 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785900 4955 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785910 4955 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785919 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785928 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785937 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785946 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785955 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785963 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785973 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785982 4955 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785990 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786003 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786024 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786037 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786049 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786060 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" 
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786071 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786083 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786093 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786104 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786116 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786524 4955 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.783984 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784165 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784677 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.784931 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.785419 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786622 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786661 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.786764 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.787211 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.788315 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.788772 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.788922 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.788938 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.789068 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:21:37.289017889 +0000 UTC m=+19.878273459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.798644 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.789079 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.798666 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.789303 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.789402 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.789395 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.789724 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.789966 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.790210 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.790086 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.790687 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.790732 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.782676 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791003 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791297 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791388 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791443 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791459 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791446 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791628 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791667 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791833 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.791891 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.792185 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.793551 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.793634 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.793585 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.793646 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.793948 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.794160 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.794126 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.794376 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.794397 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.794574 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.794630 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.794814 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.794847 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.795007 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.795195 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.795324 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.795427 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.795700 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.795701 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.795496 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.796348 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.796648 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.796833 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.796919 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.797037 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.797214 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.797470 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.797605 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.797680 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.797943 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.798266 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.798364 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.799179 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.799430 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.799457 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.799737 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.800072 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.799768 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.800752 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.800383 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.800698 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.801188 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.801832 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.803094 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.803161 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.803181 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.803216 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.803754 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.803810 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.804793 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.805996 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.806149 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:37.306096686 +0000 UTC m=+19.895352276 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.806152 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.806235 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:37.306224 +0000 UTC m=+19.895479590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.806444 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.806962 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.807541 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.808060 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.809727 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.810034 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.810353 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.810434 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.810883 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.811032 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.811423 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.811661 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.811692 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.811757 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.813836 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.814089 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.814198 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.814270 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.814564 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.815454 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.815522 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.821485 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.821527 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.821539 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.821599 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:37.321583149 +0000 UTC m=+19.910838719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.834073 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.835599 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.836035 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.836191 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f" exitCode=255
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.836235 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f"}
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.836371 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.836451 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.837200 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.837311 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.837350 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.837379 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.837440 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.837810 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.839930 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.843565 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.844606 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.847202 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.854874 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.855401 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.855685 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.855705 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.855715 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 06:21:36 crc kubenswrapper[4955]: E1128 06:21:36.855754 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:37.355740643 +0000 UTC m=+19.944996213 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.856434 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.856545 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.858861 4955 scope.go:117] "RemoveContainer" containerID="f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.859902 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.860821 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.860899 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.864361 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.870775 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.872157 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.881774 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.887091 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.887327 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.887458 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.887556 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888049 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888103 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888143 4955 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888160 4955 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888172 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888184 4955 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888224 4955 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888236 4955 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888247 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888258 4955 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: W1128 06:21:36.888074 4955 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888270 4955 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888308 4955 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888320 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888331 4955 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888342 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888352 4955 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888403 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888415 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888425 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888454 4955 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888469 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888480 4955 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888490 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888528 4955 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888540 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888549 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888560 4955 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888571 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888581 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888614 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888624 4955 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888633 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Nov
28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888645 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888657 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888692 4955 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888706 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888715 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888723 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888731 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: 
I1128 06:21:36.888739 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888767 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888777 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888787 4955 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888795 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888803 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888812 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888823 4955 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888858 4955 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888801 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888871 4955 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889042 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889055 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889112 4955 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 
06:21:36.889146 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889223 4955 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889301 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889357 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889417 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889450 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889481 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889491 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889553 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889589 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889598 4955 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889606 4955 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889615 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889624 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889633 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 
06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889642 4955 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889651 4955 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889659 4955 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889669 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889678 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889686 4955 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889695 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889704 4955 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889712 4955 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889723 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889731 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889739 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889748 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889756 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889765 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889774 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889782 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889790 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889798 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889807 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889815 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889823 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" 
DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889832 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889840 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889848 4955 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889857 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889865 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889873 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889881 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889889 4955 reconciler_common.go:293] "Volume detached for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889899 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889922 4955 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889931 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889940 4955 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889948 4955 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889956 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889964 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node 
\"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889973 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889982 4955 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889990 4955 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.889998 4955 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.890006 4955 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.890015 4955 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.890023 4955 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc 
kubenswrapper[4955]: I1128 06:21:36.890033 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891233 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891250 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891261 4955 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891273 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891286 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891299 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891311 4955 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891320 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891329 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891338 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891346 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.891359 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.888289 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.896857 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.897061 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.912192 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.912426 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.936967 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.964665 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.971748 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.974853 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.983451 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.992158 4955 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.992183 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: I1128 06:21:36.992192 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:21:36 crc kubenswrapper[4955]: W1128 06:21:36.993734 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0c8c248837f278a396ce26268799ce2361247b005fc37c5b1550cd18e8b19a81 WatchSource:0}: Error finding container 0c8c248837f278a396ce26268799ce2361247b005fc37c5b1550cd18e8b19a81: Status 404 returned error can't find the container with id 0c8c248837f278a396ce26268799ce2361247b005fc37c5b1550cd18e8b19a81 Nov 28 06:21:37 crc kubenswrapper[4955]: W1128 06:21:37.005971 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-d15bd94b5e75bb634c54a32fb97c28735a09c723ecf6b46f104feadc70197070 WatchSource:0}: Error finding container d15bd94b5e75bb634c54a32fb97c28735a09c723ecf6b46f104feadc70197070: Status 404 returned error can't find the container with id d15bd94b5e75bb634c54a32fb97c28735a09c723ecf6b46f104feadc70197070 Nov 28 06:21:37 crc 
kubenswrapper[4955]: I1128 06:21:37.293714 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.293865 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:21:38.293851285 +0000 UTC m=+20.883106855 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.332868 4955 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.394271 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.394319 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.394341 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.394359 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394420 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394433 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394465 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:38.394453196 +0000 UTC m=+20.983708766 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394467 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394482 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394554 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:38.394538168 +0000 UTC m=+20.983793738 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394606 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394644 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394656 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394609 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394713 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:38.394696913 +0000 UTC m=+20.983952483 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:37 crc kubenswrapper[4955]: E1128 06:21:37.394812 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:38.394785765 +0000 UTC m=+20.984041355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.707119 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.707623 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.708419 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.709017 4955 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.710659 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.711208 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.712397 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.713332 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.713977 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.714905 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.715486 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.716597 4955 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.717101 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.718145 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.718804 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.718742 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.719490 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.720442 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.720871 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.722228 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.723171 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.723715 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.724290 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.724731 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.725330 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.725724 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.726303 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.726915 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.727361 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.727931 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.728354 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.728804 4955 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.728899 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.730157 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.731201 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.733271 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.733747 4955 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.735338 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.736295 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.736775 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.737385 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.738443 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.738886 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.739797 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.740739 4955 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.741421 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.742265 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.742799 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.743726 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.743811 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.744408 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.745185 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.745634 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.746083 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.746952 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.747469 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.748351 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.748789 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vr4bd"] Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.749039 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vr4bd" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.750423 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.750608 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.750879 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.757869 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.769744 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.781299 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.792970 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.797115 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xk7\" (UniqueName: \"kubernetes.io/projected/85ba4360-d342-484a-a800-880080b2d0b6-kube-api-access-49xk7\") pod \"node-resolver-vr4bd\" (UID: \"85ba4360-d342-484a-a800-880080b2d0b6\") " pod="openshift-dns/node-resolver-vr4bd" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.797250 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85ba4360-d342-484a-a800-880080b2d0b6-hosts-file\") pod \"node-resolver-vr4bd\" (UID: \"85ba4360-d342-484a-a800-880080b2d0b6\") " pod="openshift-dns/node-resolver-vr4bd" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.807350 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.818215 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.831045 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.840049 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.841474 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac"} Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.842157 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.842927 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d15bd94b5e75bb634c54a32fb97c28735a09c723ecf6b46f104feadc70197070"} Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.844178 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058"} Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.844208 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f"} Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.844223 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ce08de60d19d00d1c383c198da9a9c10fd0b2c2bf9efa251cbd1cd68f86d0720"} Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.845658 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4"} Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.845687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0c8c248837f278a396ce26268799ce2361247b005fc37c5b1550cd18e8b19a81"} Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.851720 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.866157 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.879374 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.889120 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.898390 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xk7\" (UniqueName: \"kubernetes.io/projected/85ba4360-d342-484a-a800-880080b2d0b6-kube-api-access-49xk7\") pod \"node-resolver-vr4bd\" (UID: \"85ba4360-d342-484a-a800-880080b2d0b6\") " pod="openshift-dns/node-resolver-vr4bd" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.898470 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85ba4360-d342-484a-a800-880080b2d0b6-hosts-file\") pod \"node-resolver-vr4bd\" (UID: \"85ba4360-d342-484a-a800-880080b2d0b6\") " pod="openshift-dns/node-resolver-vr4bd" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.898665 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"hosts-file\" (UniqueName: \"kubernetes.io/host-path/85ba4360-d342-484a-a800-880080b2d0b6-hosts-file\") pod \"node-resolver-vr4bd\" (UID: \"85ba4360-d342-484a-a800-880080b2d0b6\") " pod="openshift-dns/node-resolver-vr4bd" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.900474 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.915038 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xk7\" (UniqueName: \"kubernetes.io/projected/85ba4360-d342-484a-a800-880080b2d0b6-kube-api-access-49xk7\") pod \"node-resolver-vr4bd\" (UID: \"85ba4360-d342-484a-a800-880080b2d0b6\") " pod="openshift-dns/node-resolver-vr4bd" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.924892 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.947545 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.960650 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.972401 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.985693 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 
06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:37 crc kubenswrapper[4955]: I1128 06:21:37.996493 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.007368 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.020961 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.059467 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vr4bd" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.117145 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-n69rx"] Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.117667 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-lmmht"] Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.117779 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.117834 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dxhtm"] Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.117945 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tj8bb"] Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.118114 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.118224 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.119435 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: W1128 06:21:38.119493 4955 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.119617 4955 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 06:21:38 crc kubenswrapper[4955]: W1128 06:21:38.119802 4955 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.119875 4955 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 06:21:38 
crc kubenswrapper[4955]: I1128 06:21:38.120027 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.120875 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 06:21:38 crc kubenswrapper[4955]: W1128 06:21:38.121214 4955 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.121257 4955 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 06:21:38 crc kubenswrapper[4955]: W1128 06:21:38.121326 4955 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 06:21:38 crc kubenswrapper[4955]: W1128 06:21:38.121336 4955 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 06:21:38 
crc kubenswrapper[4955]: E1128 06:21:38.121367 4955 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.121369 4955 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.121872 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.122212 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 06:21:38 crc kubenswrapper[4955]: W1128 06:21:38.122620 4955 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.122657 4955 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: 
User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.122889 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.124979 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.126063 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.126967 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.127081 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.127161 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.127219 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.127290 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.127419 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.137246 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.153144 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.165304 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.177371 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.197480 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.199859 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.199884 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-ovn\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.199899 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.199915 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/765bbe56-be77-4d81-824f-ad16924029f4-cni-binary-copy\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.199930 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-cni-bin\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.199944 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-os-release\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.199959 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwbm\" (UniqueName: \"kubernetes.io/projected/308c3fbd-13df-4979-ac4a-ccd4319c48d6-kube-api-access-gpwbm\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200035 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-netd\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200079 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-systemd-units\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200160 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-etc-kubernetes\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200200 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cnibin\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200216 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-system-cni-dir\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200232 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-socket-dir-parent\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200249 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-k8s-cni-cncf-io\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200265 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-multus-certs\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200298 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-system-cni-dir\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200317 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-config\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 
06:21:38.200335 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-netns\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200351 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-log-socket\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200368 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-systemd\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200381 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-node-log\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200397 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-script-lib\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 
06:21:38.200415 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-os-release\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200435 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-kubelet\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200448 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-conf-dir\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200464 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-etc-openvswitch\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200479 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-cnibin\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200494 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-netns\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200521 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-cni-multus\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200540 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-slash\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200554 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200575 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: 
I1128 06:21:38.200592 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovn-node-metrics-cert\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200623 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-bin\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200643 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200666 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200682 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-rootfs\") pod \"machine-config-daemon-lmmht\" (UID: 
\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200699 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5dxp\" (UniqueName: \"kubernetes.io/projected/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-kube-api-access-w5dxp\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200715 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-kubelet\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200729 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-var-lib-openvswitch\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200747 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-cni-dir\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200762 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-openvswitch\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200791 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-hostroot\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200812 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-proxy-tls\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200827 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4xc\" (UniqueName: \"kubernetes.io/projected/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-kube-api-access-lt4xc\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200849 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/765bbe56-be77-4d81-824f-ad16924029f4-multus-daemon-config\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200863 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kl2w\" 
(UniqueName: \"kubernetes.io/projected/765bbe56-be77-4d81-824f-ad16924029f4-kube-api-access-7kl2w\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.200877 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-env-overrides\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.209101 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.219339 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.230916 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.244074 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 
06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.255902 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.275282 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.295713 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301193 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301263 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-netns\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301280 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-log-socket\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301305 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-node-log\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301339 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-node-log\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301365 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-log-socket\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.301367 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-11-28 06:21:40.301339996 +0000 UTC m=+22.890595566 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301379 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-netns\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301409 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-script-lib\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301452 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-os-release\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301469 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-kubelet\") pod \"multus-dxhtm\" (UID: 
\"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301483 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-systemd\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301498 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-etc-openvswitch\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301534 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-cnibin\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301549 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-netns\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301563 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-cni-multus\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 
06:21:38.301579 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-conf-dir\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301594 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-slash\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301613 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301627 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301644 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-bin\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301659 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301663 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-os-release\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301675 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovn-node-metrics-cert\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301693 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-rootfs\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301700 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-cni-multus\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301709 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w5dxp\" (UniqueName: \"kubernetes.io/projected/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-kube-api-access-w5dxp\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301724 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-systemd\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301725 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-kubelet\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301752 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-var-lib-openvswitch\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301770 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-kubelet\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301781 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301806 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-cni-dir\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301796 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-conf-dir\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301833 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-hostroot\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301851 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-slash\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301854 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-openvswitch\") pod \"ovnkube-node-tj8bb\" 
(UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301875 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt4xc\" (UniqueName: \"kubernetes.io/projected/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-kube-api-access-lt4xc\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301968 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/765bbe56-be77-4d81-824f-ad16924029f4-multus-daemon-config\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301997 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kl2w\" (UniqueName: \"kubernetes.io/projected/765bbe56-be77-4d81-824f-ad16924029f4-kube-api-access-7kl2w\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302016 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-proxy-tls\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302030 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-env-overrides\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302035 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302067 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-ovn\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302043 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-ovn\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302095 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-etc-openvswitch\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 
06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302145 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-netns\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.301546 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-kubelet\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302154 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/765bbe56-be77-4d81-824f-ad16924029f4-cni-binary-copy\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302172 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-var-lib-openvswitch\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302172 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-cni-bin\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302197 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-var-lib-cni-bin\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302201 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302190 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-script-lib\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302220 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-os-release\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302236 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwbm\" (UniqueName: \"kubernetes.io/projected/308c3fbd-13df-4979-ac4a-ccd4319c48d6-kube-api-access-gpwbm\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302128 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-cnibin\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302430 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cni-binary-copy\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302633 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-hostroot\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302665 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-openvswitch\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302737 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-rootfs\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302739 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-cni-dir\") pod \"multus-dxhtm\" (UID: 
\"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302768 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-env-overrides\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302807 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-bin\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-netd\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302851 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302881 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-ovn-kubernetes\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302910 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-os-release\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302839 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.302951 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-netd\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303004 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-etc-kubernetes\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303025 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-etc-kubernetes\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 
06:21:38.303043 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-systemd-units\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303072 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-system-cni-dir\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303088 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-systemd-units\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303095 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-socket-dir-parent\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-k8s-cni-cncf-io\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303147 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-system-cni-dir\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303143 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-multus-certs\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303169 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-multus-socket-dir-parent\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303186 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-multus-certs\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303201 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-system-cni-dir\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303218 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/765bbe56-be77-4d81-824f-ad16924029f4-host-run-k8s-cni-cncf-io\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303224 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cnibin\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303248 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-config\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303250 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-system-cni-dir\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303271 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/308c3fbd-13df-4979-ac4a-ccd4319c48d6-cnibin\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303395 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/765bbe56-be77-4d81-824f-ad16924029f4-cni-binary-copy\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.303812 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-config\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.308674 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovn-node-metrics-cert\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.320799 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.322444 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt4xc\" (UniqueName: \"kubernetes.io/projected/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-kube-api-access-lt4xc\") pod \"ovnkube-node-tj8bb\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 
crc kubenswrapper[4955]: I1128 06:21:38.323668 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpwbm\" (UniqueName: \"kubernetes.io/projected/308c3fbd-13df-4979-ac4a-ccd4319c48d6-kube-api-access-gpwbm\") pod \"multus-additional-cni-plugins-n69rx\" (UID: \"308c3fbd-13df-4979-ac4a-ccd4319c48d6\") " pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.326183 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kl2w\" (UniqueName: \"kubernetes.io/projected/765bbe56-be77-4d81-824f-ad16924029f4-kube-api-access-7kl2w\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.337371 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.358068 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.370549 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.383117 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.402199 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.404804 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.404869 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.404907 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.404961 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.404978 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405019 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:40.404999143 +0000 UTC m=+22.994254713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405076 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405134 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405152 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:40.405135227 +0000 UTC m=+22.994390797 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405162 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405184 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405093 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405249 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:40.405227319 +0000 UTC m=+22.994482919 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405261 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405276 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.405305 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:40.405297411 +0000 UTC m=+22.994552981 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.415543 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.429892 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.430923 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n69rx" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.447285 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.450931 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: W1128 06:21:38.464040 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e192dfd_62ad_4870_b2fd_3c2a09006f6f.slice/crio-8d6b5060096695cdf1c12005a6cf7e007be5281f79f93f7b091eefe614524e33 WatchSource:0}: Error finding container 8d6b5060096695cdf1c12005a6cf7e007be5281f79f93f7b091eefe614524e33: Status 404 returned error can't find the container with id 8d6b5060096695cdf1c12005a6cf7e007be5281f79f93f7b091eefe614524e33 Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.703205 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.703246 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.703263 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.703336 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.703442 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:38 crc kubenswrapper[4955]: E1128 06:21:38.703546 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.850315 4955 generic.go:334] "Generic (PLEG): container finished" podID="308c3fbd-13df-4979-ac4a-ccd4319c48d6" containerID="95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b" exitCode=0 Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.850382 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" event={"ID":"308c3fbd-13df-4979-ac4a-ccd4319c48d6","Type":"ContainerDied","Data":"95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b"} Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.850475 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" event={"ID":"308c3fbd-13df-4979-ac4a-ccd4319c48d6","Type":"ContainerStarted","Data":"6bc113cbab9d81cd1188397b9bc4ecec943d0048538275d91f35cfdb600eaf50"} Nov 28 
06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.851738 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032" exitCode=0 Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.851814 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"} Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.851867 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"8d6b5060096695cdf1c12005a6cf7e007be5281f79f93f7b091eefe614524e33"} Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.853336 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vr4bd" event={"ID":"85ba4360-d342-484a-a800-880080b2d0b6","Type":"ContainerStarted","Data":"ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145"} Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.853383 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vr4bd" event={"ID":"85ba4360-d342-484a-a800-880080b2d0b6","Type":"ContainerStarted","Data":"454040cedf2de6fe2dfd452e7ee5c6ae076fdfa43a6add4902eadf2b8c56eac8"} Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.873723 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.890819 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.905195 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.920445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.940079 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.949163 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.957164 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-proxy-tls\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.960525 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.973326 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:38 crc kubenswrapper[4955]: I1128 06:21:38.997043 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.010832 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.027209 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.042358 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.055667 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.071781 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.074208 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.086332 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.100844 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.117841 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.129998 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.141744 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.159868 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.174107 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.186414 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.201731 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.217393 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.234147 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.251424 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.253330 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/765bbe56-be77-4d81-824f-ad16924029f4-multus-daemon-config\") pod \"multus-dxhtm\" (UID: \"765bbe56-be77-4d81-824f-ad16924029f4\") " pod="openshift-multus/multus-dxhtm" Nov 28 
06:21:39 crc kubenswrapper[4955]: E1128 06:21:39.303187 4955 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Nov 28 06:21:39 crc kubenswrapper[4955]: E1128 06:21:39.303296 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-mcd-auth-proxy-config podName:ad229ad8-9ea1-483d-a615-3f7d2ab408bc nodeName:}" failed. No retries permitted until 2025-11-28 06:21:39.803271863 +0000 UTC m=+22.392527433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-mcd-auth-proxy-config") pod "machine-config-daemon-lmmht" (UID: "ad229ad8-9ea1-483d-a615-3f7d2ab408bc") : failed to sync configmap cache: timed out waiting for the condition Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.342242 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dxhtm" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.351481 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 06:21:39 crc kubenswrapper[4955]: W1128 06:21:39.355203 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod765bbe56_be77_4d81_824f_ad16924029f4.slice/crio-8753748afe5e26851202cc96e29d48643c152032f855f044f8e31d69386d0776 WatchSource:0}: Error finding container 8753748afe5e26851202cc96e29d48643c152032f855f044f8e31d69386d0776: Status 404 returned error can't find the container with id 8753748afe5e26851202cc96e29d48643c152032f855f044f8e31d69386d0776 Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.392611 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.506572 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.517071 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5dxp\" (UniqueName: \"kubernetes.io/projected/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-kube-api-access-w5dxp\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.819070 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " 
pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.820186 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad229ad8-9ea1-483d-a615-3f7d2ab408bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-lmmht\" (UID: \"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\") " pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.858499 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.862601 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.862638 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.862649 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.862660 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" 
event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.862669 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.862679 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.864019 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dxhtm" event={"ID":"765bbe56-be77-4d81-824f-ad16924029f4","Type":"ContainerStarted","Data":"96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.864055 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dxhtm" event={"ID":"765bbe56-be77-4d81-824f-ad16924029f4","Type":"ContainerStarted","Data":"8753748afe5e26851202cc96e29d48643c152032f855f044f8e31d69386d0776"} Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.865774 4955 generic.go:334] "Generic (PLEG): container finished" podID="308c3fbd-13df-4979-ac4a-ccd4319c48d6" containerID="a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5" exitCode=0 Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.865850 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" event={"ID":"308c3fbd-13df-4979-ac4a-ccd4319c48d6","Type":"ContainerDied","Data":"a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5"} Nov 28 
06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.876160 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.890956 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.908172 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.919713 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.936566 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.938790 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: W1128 06:21:39.952693 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad229ad8_9ea1_483d_a615_3f7d2ab408bc.slice/crio-22193b14c5725b684d670e23f27f8461d528e70fa5f17cbd85eb473b26fad85a WatchSource:0}: Error finding container 22193b14c5725b684d670e23f27f8461d528e70fa5f17cbd85eb473b26fad85a: Status 404 returned error can't find the container with id 22193b14c5725b684d670e23f27f8461d528e70fa5f17cbd85eb473b26fad85a Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.953637 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.969297 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:39 crc kubenswrapper[4955]: I1128 06:21:39.989368 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:39Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.008288 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.020956 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.033815 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.047539 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.065915 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.079481 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.090773 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.106418 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.121251 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.161240 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.191087 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.195128 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.201881 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.205022 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.209881 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.223420 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.233162 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.243180 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.259795 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.269404 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.279699 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.290544 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.302232 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.313783 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.324748 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.324901 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.325028 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:21:44.325012971 +0000 UTC m=+26.914268541 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.355901 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.400539 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.426305 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.426358 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.426384 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.426474 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426584 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426612 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426624 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426633 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426629 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426738 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-28 06:21:44.426711383 +0000 UTC m=+27.015966993 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426640 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426599 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426646 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426888 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:44.426856217 +0000 UTC m=+27.016111797 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426920 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:44.426904448 +0000 UTC m=+27.016160118 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.426935 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:44.426928949 +0000 UTC m=+27.016184519 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.431659 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.472963 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.497423 4955 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.499455 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.499495 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.499523 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.499632 4955 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.516892 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 
06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.565930 4955 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.566154 4955 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.567068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.567117 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.567129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.567145 4955 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.567157 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.594365 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.595904 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.599409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.599443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.599455 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.599471 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.599482 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.611147 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.614118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.614142 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.614149 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.614162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.614171 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.632374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.632400 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.632419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.632432 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.632441 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.640036 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.652267 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.655659 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.655696 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.655712 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.655733 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.655748 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.670190 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.670369 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.671981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.672010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.672024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.672044 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.672060 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.703403 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.703580 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.704015 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.704111 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.704177 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:40 crc kubenswrapper[4955]: E1128 06:21:40.704258 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.736253 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qtmxm"] Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.736777 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.738688 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.738738 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.742273 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.745945 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.756421 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.774395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.774489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.774551 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.774575 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.774591 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.792754 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.829475 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmz6w\" (UniqueName: \"kubernetes.io/projected/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-kube-api-access-xmz6w\") pod \"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.829604 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-host\") pod 
\"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.829685 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-serviceca\") pod \"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.841209 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.870540 4955 generic.go:334] "Generic (PLEG): container finished" podID="308c3fbd-13df-4979-ac4a-ccd4319c48d6" containerID="10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110" exitCode=0 Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.870602 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" event={"ID":"308c3fbd-13df-4979-ac4a-ccd4319c48d6","Type":"ContainerDied","Data":"10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.872883 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11"} Nov 28 
06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.872947 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.873055 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"22193b14c5725b684d670e23f27f8461d528e70fa5f17cbd85eb473b26fad85a"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.876843 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.876932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.876956 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.876985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.877008 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.883367 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.916151 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.931252 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-host\") pod \"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.931375 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-serviceca\") pod \"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.931824 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-host\") pod \"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.931993 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmz6w\" (UniqueName: 
\"kubernetes.io/projected/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-kube-api-access-xmz6w\") pod \"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.932968 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-serviceca\") pod \"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.952963 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:40Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.982660 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.982943 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.982955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.982972 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.982984 4955 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:40Z","lastTransitionTime":"2025-11-28T06:21:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:40 crc kubenswrapper[4955]: I1128 06:21:40.987779 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmz6w\" (UniqueName: \"kubernetes.io/projected/6809f180-bdb9-4c8f-a2de-b90ac9535ed0-kube-api-access-xmz6w\") pod \"node-ca-qtmxm\" (UID: \"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\") " pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.013106 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.053353 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.085642 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.085770 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.085833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.085894 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.085948 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.095615 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.131682 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.150118 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-qtmxm" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.172397 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-contr
oller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.187678 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.187711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.187719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.187732 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.187744 4955 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: W1128 06:21:41.214939 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6809f180_bdb9_4c8f_a2de_b90ac9535ed0.slice/crio-793b4f75db617e67969fc0de9c0f166e4e528db20de76f9fd097f5846f1c6b8b WatchSource:0}: Error finding container 793b4f75db617e67969fc0de9c0f166e4e528db20de76f9fd097f5846f1c6b8b: Status 404 returned error can't find the container with id 793b4f75db617e67969fc0de9c0f166e4e528db20de76f9fd097f5846f1c6b8b Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.220220 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.255519 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.290452 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.290519 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.290530 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.290546 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.290557 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.295138 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.334410 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.376243 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.393286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.393321 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.393332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.393348 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.393359 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.412245 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.454330 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.493401 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.495149 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.495191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.495205 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc 
kubenswrapper[4955]: I1128 06:21:41.495222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.495232 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.531449 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.574612 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.597385 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc 
kubenswrapper[4955]: I1128 06:21:41.597423 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.597431 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.597446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.597461 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.621730 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.668187 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.696746 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.699064 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.699115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.699126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.699147 4955 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.699155 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.734166 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.779134 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 
06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.801440 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.801492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.801523 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.801541 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.801553 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.816708 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.856453 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f
905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.879121 4955 generic.go:334] "Generic (PLEG): container finished" podID="308c3fbd-13df-4979-ac4a-ccd4319c48d6" containerID="9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9" exitCode=0 Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.879201 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" 
event={"ID":"308c3fbd-13df-4979-ac4a-ccd4319c48d6","Type":"ContainerDied","Data":"9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.880639 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qtmxm" event={"ID":"6809f180-bdb9-4c8f-a2de-b90ac9535ed0","Type":"ContainerStarted","Data":"d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.880671 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qtmxm" event={"ID":"6809f180-bdb9-4c8f-a2de-b90ac9535ed0","Type":"ContainerStarted","Data":"793b4f75db617e67969fc0de9c0f166e4e528db20de76f9fd097f5846f1c6b8b"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.885692 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.899987 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.904112 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.904168 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.904187 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.904210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.904229 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:41Z","lastTransitionTime":"2025-11-28T06:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.934433 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:41 crc kubenswrapper[4955]: I1128 06:21:41.979009 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:41Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.006352 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.006399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.006413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.006434 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.006453 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.021426 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.054097 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.095795 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f
905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.108385 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.108413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.108420 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc 
kubenswrapper[4955]: I1128 06:21:42.108434 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.108444 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.138430 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.176543 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.210320 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.210371 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.210386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 
06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.210403 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.210745 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.213596 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.262363 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.296249 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.313015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.313236 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.313336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.313437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.313586 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.336952 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.384618 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96
b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.414826 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.416328 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.416363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.416374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.416389 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.416400 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.453905 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.499637 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.518252 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.518298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.518310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.518329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.518341 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.532403 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.577171 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.616089 4955 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.621298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.621443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.621598 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.621734 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.621844 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.655625 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.699403 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.703599 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.703633 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:42 crc kubenswrapper[4955]: E1128 06:21:42.703773 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:42 crc kubenswrapper[4955]: E1128 06:21:42.703949 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.703805 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:42 crc kubenswrapper[4955]: E1128 06:21:42.704125 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.724640 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.724704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.724725 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.724752 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.724778 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.739127 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.778895 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.822237 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.828392 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.828460 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.828701 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.828731 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.828753 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.872575 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.896017 4955 generic.go:334] "Generic (PLEG): container finished" podID="308c3fbd-13df-4979-ac4a-ccd4319c48d6" containerID="1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0" exitCode=0 Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.896237 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" event={"ID":"308c3fbd-13df-4979-ac4a-ccd4319c48d6","Type":"ContainerDied","Data":"1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.910200 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 
06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.933099 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.933170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.933199 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.933233 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.933255 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:42Z","lastTransitionTime":"2025-11-28T06:21:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.937039 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:42 crc kubenswrapper[4955]: I1128 06:21:42.982905 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f
905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.024604 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.036490 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.036611 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.036629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.036654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.036673 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.062596 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.103048 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.139981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.140033 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.140048 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.140070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.140087 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.152058 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.184452 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.213848 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.244307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.244357 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.244377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.244403 4955 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.244422 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.258372 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.304443 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.336593 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.347823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.347882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.347907 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.347936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.347957 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.379578 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.415757 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.451574 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.451638 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.451656 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc 
kubenswrapper[4955]: I1128 06:21:43.451679 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.451696 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.457894 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.502201 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.541873 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.555173 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.555270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.555297 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.555323 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.555342 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.659305 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.659385 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.659411 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.659440 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.659465 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.762883 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.762941 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.762955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.762974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.762989 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.865809 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.865838 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.865846 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.865861 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.865871 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.905048 4955 generic.go:334] "Generic (PLEG): container finished" podID="308c3fbd-13df-4979-ac4a-ccd4319c48d6" containerID="61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6" exitCode=0 Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.905275 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" event={"ID":"308c3fbd-13df-4979-ac4a-ccd4319c48d6","Type":"ContainerDied","Data":"61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.920260 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.921339 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.921415 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.921611 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.923910 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.944267 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.959182 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.959674 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.966316 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.968599 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.968648 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.968666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.968689 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.968707 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:43Z","lastTransitionTime":"2025-11-28T06:21:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:43 crc kubenswrapper[4955]: I1128 06:21:43.991883 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:43Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.011758 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.027659 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.040996 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f
905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.055175 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.067348 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.078221 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.078264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.078276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.078293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.078306 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.085049 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.100204 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.115328 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.130890 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.148920 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.163781 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.177698 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.180588 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.180643 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.180661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.180681 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.180694 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.215476 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.256187 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.283439 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.283492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.283524 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.283546 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.283561 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.299195 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.338848 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.367855 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.368116 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:21:52.368076452 +0000 UTC m=+34.957332042 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.387302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.387364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.387382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.387409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.387427 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.391296 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.420664 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-2
8T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.460342 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f
905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.468886 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.468977 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.469016 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.469058 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469082 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469120 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469141 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469205 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469293 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469213 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:52.469188078 +0000 UTC m=+35.058443678 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469410 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469435 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469456 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:52.469421884 +0000 UTC m=+35.058677494 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469465 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469561 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:52.469476826 +0000 UTC m=+35.058732606 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.469634 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-28 06:21:52.46962149 +0000 UTC m=+35.058877280 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.490394 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.490484 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.490530 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.490558 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.490576 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.505327 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.538488 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.578410 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.594041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.594082 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.594094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.594113 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.594127 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.617247 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.654939 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.697646 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.697692 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.697707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.697727 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.697740 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.704301 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.704368 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.704443 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.704498 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.704600 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:44 crc kubenswrapper[4955]: E1128 06:21:44.704775 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.800155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.800191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.800200 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.800230 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.800238 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.902858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.902912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.902929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.902951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.902969 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:44Z","lastTransitionTime":"2025-11-28T06:21:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.931715 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" event={"ID":"308c3fbd-13df-4979-ac4a-ccd4319c48d6","Type":"ContainerStarted","Data":"a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b"} Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.952642 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.970695 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:44 crc kubenswrapper[4955]: I1128 06:21:44.989765 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f
905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:44Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.006461 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.006497 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.006525 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc 
kubenswrapper[4955]: I1128 06:21:45.006542 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.006554 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.014234 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.042051 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.058398 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.081413 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.105791 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.109180 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.109243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.109261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc 
kubenswrapper[4955]: I1128 06:21:45.109284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.109302 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.126834 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.146846 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.167877 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.183966 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.204349 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.216118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.216195 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.216208 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.216227 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.216241 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.231735 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:45Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.318794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.318845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.318855 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.318872 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.319193 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.422407 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.422452 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.422464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.422482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.422494 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.525674 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.525734 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.525752 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.525778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.525796 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.629181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.629256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.629280 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.629309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.629330 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.731686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.731762 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.731786 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.731813 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.731835 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.834290 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.834334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.834350 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.834374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.834388 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.935887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.935931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.935945 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.935961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:45 crc kubenswrapper[4955]: I1128 06:21:45.935973 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:45Z","lastTransitionTime":"2025-11-28T06:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.038989 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.039025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.039034 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.039050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.039062 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.141051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.141093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.141105 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.141123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.141134 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.243404 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.243431 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.243439 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.243451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.243459 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.346936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.347002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.347028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.347059 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.347084 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.451131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.451190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.451208 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.451236 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.451253 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.554286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.554365 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.554382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.554400 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.554412 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.658494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.658901 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.658923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.658948 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.658966 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.703404 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.703439 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.703423 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:46 crc kubenswrapper[4955]: E1128 06:21:46.703624 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:46 crc kubenswrapper[4955]: E1128 06:21:46.703783 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:46 crc kubenswrapper[4955]: E1128 06:21:46.703950 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.762739 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.762791 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.762808 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.762832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.762850 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.865834 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.865910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.865927 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.865951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.865970 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.939635 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/0.log" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.943468 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45" exitCode=1 Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.943554 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.944279 4955 scope.go:117] "RemoveContainer" containerID="6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.966338 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.968353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.968392 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.968407 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.968425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.968439 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:46Z","lastTransitionTime":"2025-11-28T06:21:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:46 crc kubenswrapper[4955]: I1128 06:21:46.993483 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:46Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.013868 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.035488 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.058477 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.091248 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:46Z\\\",\\\"message\\\":\\\"ng reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.572634 6252 reflector.go:311] Stopping reflector 
*v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.572970 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:21:46.573065 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:21:46.573093 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:21:46.573104 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:21:46.573113 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 06:21:46.573125 6252 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.573141 6252 factory.go:656] Stopping watch factory\\\\nI1128 06:21:46.573168 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:21:46.573166 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 06:21:46.573213 6252 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.110963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.111032 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.111058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.111088 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.111111 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.113398 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.131585 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.151571 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 
06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.169743 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.188588 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.207695 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.213884 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.213933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.213950 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.214011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.214031 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.246151 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.258439 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.317008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.317084 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.317106 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.317130 4955 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.317149 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.420035 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.420103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.420122 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.420150 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.420166 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.523246 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.523292 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.523302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.523316 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.523325 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.625862 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.625913 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.625933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.626015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.626034 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.726395 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.728407 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.728460 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.728480 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.728530 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.728549 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.743261 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.761603 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.800309 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.820041 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.831383 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.831421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.831432 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.831450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.831466 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.836550 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.853137 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.868966 4955 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.888178 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.904332 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.918290 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.929878 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.933189 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.933223 4955 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.933233 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.933248 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.933257 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:47Z","lastTransitionTime":"2025-11-28T06:21:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.941005 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.948184 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/0.log" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.951121 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" 
event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45"} Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.951538 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.980815 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:46Z\\\",\\\"message\\\":\\\"ng reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.572634 6252 reflector.go:311] Stopping reflector 
*v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.572970 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:21:46.573065 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:21:46.573093 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:21:46.573104 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:21:46.573113 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 06:21:46.573125 6252 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.573141 6252 factory.go:656] Stopping watch factory\\\\nI1128 06:21:46.573168 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:21:46.573166 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 06:21:46.573213 6252 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:47 crc kubenswrapper[4955]: I1128 06:21:47.994479 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.007220 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.020418 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.030742 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.035633 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.035695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.035714 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.035741 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.035795 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.048139 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:46Z\\\",\\\"message\\\":\\\"ng reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.572634 6252 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.572970 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:21:46.573065 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:21:46.573093 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:21:46.573104 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:21:46.573113 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 06:21:46.573125 6252 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.573141 6252 factory.go:656] Stopping watch factory\\\\nI1128 06:21:46.573168 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:21:46.573166 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 06:21:46.573213 6252 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.061232 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.076723 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.089448 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.103331 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.112960 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.123707 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f
905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.138683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.138767 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.138790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc 
kubenswrapper[4955]: I1128 06:21:48.138811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.138826 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.176256 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.191921 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.205694 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.242022 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.242085 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.242099 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.242131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.242151 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.344424 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.344481 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.344498 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.344554 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.344572 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.446798 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.446861 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.446880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.446909 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.446928 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.549075 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.549120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.549136 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.549157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.549174 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.652785 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.652850 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.652872 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.652904 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.652925 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.703687 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.703722 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.703854 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:48 crc kubenswrapper[4955]: E1128 06:21:48.704005 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:48 crc kubenswrapper[4955]: E1128 06:21:48.704129 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:48 crc kubenswrapper[4955]: E1128 06:21:48.704291 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.756214 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.756287 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.756304 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.756329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.756354 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.859890 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.859952 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.859970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.859995 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.860030 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.957633 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/1.log" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.958663 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/0.log" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.962768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.962830 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.962853 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.962884 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.962908 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:48Z","lastTransitionTime":"2025-11-28T06:21:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.964446 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45" exitCode=1 Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.964502 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45"} Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.964591 4955 scope.go:117] "RemoveContainer" containerID="6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.965463 4955 scope.go:117] "RemoveContainer" containerID="97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45" Nov 28 06:21:48 crc kubenswrapper[4955]: E1128 06:21:48.965774 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:21:48 crc kubenswrapper[4955]: I1128 06:21:48.986762 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.026696 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f58b45056b681a1f0e2b692718620f788a9646cb405c771bff388ebe63dcf45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:46Z\\\",\\\"message\\\":\\\"ng reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.572634 6252 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.572970 6252 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 06:21:46.573065 6252 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 06:21:46.573093 6252 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 06:21:46.573104 6252 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 06:21:46.573113 6252 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 06:21:46.573125 6252 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:21:46.573141 6252 factory.go:656] Stopping watch factory\\\\nI1128 06:21:46.573168 6252 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 06:21:46.573166 6252 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 06:21:46.573213 6252 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/et
c/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.047389 4955 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.066104 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.066438 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.066482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.066494 4955 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.066535 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.066554 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.086728 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.100150 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.116437 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f
905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.137739 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.157049 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.169680 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.169736 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.169750 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.169777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.169794 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.171651 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.189013 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.207301 4955 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.223030 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.235975 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.272564 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.272597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.272609 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.272625 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.272639 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.375927 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.375992 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.376013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.376041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.376072 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.479450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.479538 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.479559 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.479583 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.479601 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.582789 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.582848 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.582866 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.582888 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.582905 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.685909 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.685963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.685980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.686001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.686019 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.788607 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.788684 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.788707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.788745 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.788766 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.892200 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.892270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.892286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.892310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.892331 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.971657 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/1.log" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.978052 4955 scope.go:117] "RemoveContainer" containerID="97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45" Nov 28 06:21:49 crc kubenswrapper[4955]: E1128 06:21:49.978300 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.995498 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.995617 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.995643 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.995672 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.995695 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:49Z","lastTransitionTime":"2025-11-28T06:21:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:49 crc kubenswrapper[4955]: I1128 06:21:49.998997 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:49Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.023470 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.040748 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.061327 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.080265 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.098584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.098650 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.098674 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.098704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.098727 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.100835 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.118980 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.138472 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff712116
1eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.160731 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.196058 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.201261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.201308 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.201324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.201344 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.201360 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.219565 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.235620 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.255445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.272089 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.304781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.304875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.304895 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.304920 4955 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.304936 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.407960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.408035 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.408059 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.408087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.408108 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.511441 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.511527 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.511550 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.511611 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.511630 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.614868 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.614941 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.614960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.614987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.615006 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.692231 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx"] Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.692962 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.695043 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.695120 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.703677 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.703677 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.703879 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.703685 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.703986 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.704179 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.718063 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.718121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.718142 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.718170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.718195 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.719288 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.737111 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.742809 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.742855 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.742873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.742897 4955 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.742913 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.754539 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.766337 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badb
f9b65f7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.770857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.770915 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.770934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.770955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.770971 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.771750 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.789625 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.794570 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.794621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.794638 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.794661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.794678 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.794876 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.808489 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.810986 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.812632 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.812660 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.812670 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.812686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.812697 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.825572 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.832058 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\
\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":
485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"sys
temUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.835724 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.835772 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.835789 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.835811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.835830 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.845535 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91
040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.849049 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmjdm\" (UniqueName: \"kubernetes.io/projected/19a70e1d-140d-47b9-8ad9-3555be91ba0d-kube-api-access-bmjdm\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.849196 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19a70e1d-140d-47b9-8ad9-3555be91ba0d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.849246 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19a70e1d-140d-47b9-8ad9-3555be91ba0d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.849290 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19a70e1d-140d-47b9-8ad9-3555be91ba0d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.871751 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: E1128 06:21:50.871989 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.874851 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.874910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.874936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.874970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.874991 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.881717 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.900239 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.918432 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.940408 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.949930 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19a70e1d-140d-47b9-8ad9-3555be91ba0d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.950002 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmjdm\" (UniqueName: \"kubernetes.io/projected/19a70e1d-140d-47b9-8ad9-3555be91ba0d-kube-api-access-bmjdm\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.950077 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19a70e1d-140d-47b9-8ad9-3555be91ba0d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.950119 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19a70e1d-140d-47b9-8ad9-3555be91ba0d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.951137 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/19a70e1d-140d-47b9-8ad9-3555be91ba0d-env-overrides\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.951611 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/19a70e1d-140d-47b9-8ad9-3555be91ba0d-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.955607 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.960665 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/19a70e1d-140d-47b9-8ad9-3555be91ba0d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.967405 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmjdm\" (UniqueName: \"kubernetes.io/projected/19a70e1d-140d-47b9-8ad9-3555be91ba0d-kube-api-access-bmjdm\") pod \"ovnkube-control-plane-749d76644c-rsrvx\" (UID: \"19a70e1d-140d-47b9-8ad9-3555be91ba0d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.977958 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.978255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.978289 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.978298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.978318 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.978333 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:50Z","lastTransitionTime":"2025-11-28T06:21:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:50 crc kubenswrapper[4955]: I1128 06:21:50.995116 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:50Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.014277 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" Nov 28 06:21:51 crc kubenswrapper[4955]: W1128 06:21:51.027716 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19a70e1d_140d_47b9_8ad9_3555be91ba0d.slice/crio-0e650feed2b6a20ff6c1e397d54a25527fbd8e23bf56f77392e68aedede70817 WatchSource:0}: Error finding container 0e650feed2b6a20ff6c1e397d54a25527fbd8e23bf56f77392e68aedede70817: Status 404 returned error can't find the container with id 0e650feed2b6a20ff6c1e397d54a25527fbd8e23bf56f77392e68aedede70817 Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.081538 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.081880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.082047 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.082241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.082408 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.084624 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.096908 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.109188 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.119785 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.135237 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.149903 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.158570 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.169076 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.183741 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.185047 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.185190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.185282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.185373 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.185457 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.197717 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.207724 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.221180 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.234339 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.246136 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.258598 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.271174 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.288197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.288241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.288253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.288273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.288289 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.391250 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.391316 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.391334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.391366 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.391387 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.493118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.493150 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.493160 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.493172 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.493183 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.594968 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.595001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.595012 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.595027 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.595038 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.697715 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.697768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.697778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.697792 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.697802 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.800636 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.800680 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.800690 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.800706 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.800717 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.810562 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mhptq"] Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.811168 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:51 crc kubenswrapper[4955]: E1128 06:21:51.811225 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.833292 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.853728 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"}
,{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.877236 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.903180 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.903234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.903253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.903283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.903306 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:51Z","lastTransitionTime":"2025-11-28T06:21:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.914165 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.950448 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.959123 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.959203 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmkx\" (UniqueName: \"kubernetes.io/projected/483773b2-23ab-4ebe-8111-f553a0c95523-kube-api-access-qkmkx\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.964202 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.977623 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.985248 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" event={"ID":"19a70e1d-140d-47b9-8ad9-3555be91ba0d","Type":"ContainerStarted","Data":"79772f782a31e1a9509e49e73f556db489e14da15c19fe13fad041b0549ab919"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.985290 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" event={"ID":"19a70e1d-140d-47b9-8ad9-3555be91ba0d","Type":"ContainerStarted","Data":"9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.985302 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" event={"ID":"19a70e1d-140d-47b9-8ad9-3555be91ba0d","Type":"ContainerStarted","Data":"0e650feed2b6a20ff6c1e397d54a25527fbd8e23bf56f77392e68aedede70817"} Nov 28 06:21:51 crc kubenswrapper[4955]: I1128 06:21:51.995568 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:51Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.005553 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.005580 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc 
kubenswrapper[4955]: I1128 06:21:52.005603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.005619 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.005630 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.009614 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.021716 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.031847 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.043399 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.057037 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.060460 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.060546 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmkx\" (UniqueName: \"kubernetes.io/projected/483773b2-23ab-4ebe-8111-f553a0c95523-kube-api-access-qkmkx\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.060596 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 
06:21:52.060681 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs podName:483773b2-23ab-4ebe-8111-f553a0c95523 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:52.560663948 +0000 UTC m=+35.149919518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs") pod "network-metrics-daemon-mhptq" (UID: "483773b2-23ab-4ebe-8111-f553a0c95523") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.070495 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.078399 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkmkx\" (UniqueName: \"kubernetes.io/projected/483773b2-23ab-4ebe-8111-f553a0c95523-kube-api-access-qkmkx\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.082544 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.097474 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc 
kubenswrapper[4955]: I1128 06:21:52.107256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.107279 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.107288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.107303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.107315 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.117473 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.128445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.141758 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.154125 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.168212 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.176743 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.189071 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.200712 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc 
kubenswrapper[4955]: I1128 06:21:52.209915 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.209978 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.209991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.210015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.210028 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.213450 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91
040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.225443 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.240268 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.259973 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.277778 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.305919 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.315120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.315189 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.315210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.315235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.315254 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.331213 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.345139 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:52Z is after 
2025-08-24T17:21:41Z" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.418210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.418262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.418274 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.418289 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.418298 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.465009 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.465176 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:22:08.465152561 +0000 UTC m=+51.054408131 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.521597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.521657 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.521670 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.521693 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.521707 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.566731 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.566809 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.566863 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.566899 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.566944 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567085 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567096 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567170 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:08.56714053 +0000 UTC m=+51.156396130 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567210 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567276 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567227 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:08.567191702 +0000 UTC m=+51.156447312 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567305 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567429 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:08.567394087 +0000 UTC m=+51.156649687 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567096 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567643 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs podName:483773b2-23ab-4ebe-8111-f553a0c95523 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:53.567601553 +0000 UTC m=+36.156857323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs") pod "network-metrics-daemon-mhptq" (UID: "483773b2-23ab-4ebe-8111-f553a0c95523") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567677 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567718 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567742 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.567803 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:08.567783698 +0000 UTC m=+51.157039308 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.624882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.624968 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.624994 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.625051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.625075 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.703739 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.703766 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.703837 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.703910 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.704052 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:52 crc kubenswrapper[4955]: E1128 06:21:52.704185 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.728750 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.728792 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.728808 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.728831 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.728847 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.831754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.831800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.831811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.831828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.831841 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.934773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.934836 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.934861 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.934889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:52 crc kubenswrapper[4955]: I1128 06:21:52.934915 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:52Z","lastTransitionTime":"2025-11-28T06:21:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.038169 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.038231 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.038247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.038273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.038296 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.141075 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.141143 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.141167 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.141202 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.141224 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.244444 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.244543 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.244573 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.244621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.244645 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.348473 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.348595 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.348620 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.348652 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.348676 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.452120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.452215 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.452235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.452258 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.452275 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.555244 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.555298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.555315 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.555338 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.555357 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.578406 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:53 crc kubenswrapper[4955]: E1128 06:21:53.578657 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:53 crc kubenswrapper[4955]: E1128 06:21:53.578747 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs podName:483773b2-23ab-4ebe-8111-f553a0c95523 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:55.578722546 +0000 UTC m=+38.167978156 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs") pod "network-metrics-daemon-mhptq" (UID: "483773b2-23ab-4ebe-8111-f553a0c95523") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.659033 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.659102 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.659121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.659616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.659674 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.703627 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:53 crc kubenswrapper[4955]: E1128 06:21:53.703809 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.763052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.763158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.763226 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.763264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.763347 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.866870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.866925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.866944 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.866967 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.866984 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.970389 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.970447 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.970480 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.970550 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:53 crc kubenswrapper[4955]: I1128 06:21:53.970574 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:53Z","lastTransitionTime":"2025-11-28T06:21:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.074132 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.074192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.074204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.074224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.074237 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.177731 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.177812 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.177830 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.177853 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.177870 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.281894 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.281956 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.282001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.282052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.282078 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.384982 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.385049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.385070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.385093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.385110 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.488082 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.488140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.488164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.488192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.488215 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.591302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.591368 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.591386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.591410 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.591459 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.694238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.694308 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.694343 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.694373 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.694395 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.703641 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.703681 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:54 crc kubenswrapper[4955]: E1128 06:21:54.703793 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.703650 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:54 crc kubenswrapper[4955]: E1128 06:21:54.703930 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:54 crc kubenswrapper[4955]: E1128 06:21:54.704051 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.797799 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.797910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.797929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.797960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.797981 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.902123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.902190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.902204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.902226 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:54 crc kubenswrapper[4955]: I1128 06:21:54.902240 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:54Z","lastTransitionTime":"2025-11-28T06:21:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.005868 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.005932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.005960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.005988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.006011 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.108865 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.108944 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.108962 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.108988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.109006 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.212640 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.212711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.212744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.212774 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.212797 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.315926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.315987 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.316008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.316037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.316058 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.419238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.419306 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.419341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.419371 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.419398 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.521989 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.522051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.522068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.522089 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.522108 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.601125 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:55 crc kubenswrapper[4955]: E1128 06:21:55.601329 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:55 crc kubenswrapper[4955]: E1128 06:21:55.601451 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs podName:483773b2-23ab-4ebe-8111-f553a0c95523 nodeName:}" failed. No retries permitted until 2025-11-28 06:21:59.601420484 +0000 UTC m=+42.190676094 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs") pod "network-metrics-daemon-mhptq" (UID: "483773b2-23ab-4ebe-8111-f553a0c95523") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.625246 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.625295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.625312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.625334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.625351 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.704351 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:55 crc kubenswrapper[4955]: E1128 06:21:55.704598 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.727832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.727877 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.727897 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.727924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.727951 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.830983 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.831047 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.831064 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.831087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.831103 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.933981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.934059 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.934097 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.934128 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:55 crc kubenswrapper[4955]: I1128 06:21:55.934147 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:55Z","lastTransitionTime":"2025-11-28T06:21:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.036855 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.036899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.036915 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.036936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.036956 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.139871 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.140300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.140534 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.140723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.140855 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.243676 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.244091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.244245 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.244384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.244553 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.350591 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.351166 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.351332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.351558 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.351704 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.454275 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.454696 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.454864 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.455024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.455192 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.558394 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.558490 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.558545 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.558580 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.558603 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.662403 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.662819 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.662967 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.663184 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.663353 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.703323 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.703335 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:56 crc kubenswrapper[4955]: E1128 06:21:56.703854 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:56 crc kubenswrapper[4955]: E1128 06:21:56.703965 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.703387 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:56 crc kubenswrapper[4955]: E1128 06:21:56.704188 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.766193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.766271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.766293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.766324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.766343 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.869086 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.869162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.869185 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.869216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.869239 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.972592 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.972661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.972679 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.972707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:56 crc kubenswrapper[4955]: I1128 06:21:56.972726 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:56Z","lastTransitionTime":"2025-11-28T06:21:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.076175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.076222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.076238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.076260 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.076275 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.180130 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.180560 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.180807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.181037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.181298 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.284886 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.285271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.285417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.285612 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.285795 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.388881 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.388925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.388939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.388957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.388970 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.492863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.492920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.492937 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.492961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.492977 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.596099 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.596468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.596725 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.596931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.597074 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.700828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.700911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.700935 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.700967 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.701000 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.703678 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:57 crc kubenswrapper[4955]: E1128 06:21:57.703931 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.726185 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.750229 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.766300 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.784924 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.803197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.803273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.803290 4955 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.803314 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.803360 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.810589 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.831786 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.855946 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.875333 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.893634 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc 
kubenswrapper[4955]: I1128 06:21:57.905870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.905955 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.906010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.906033 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.906052 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:57Z","lastTransitionTime":"2025-11-28T06:21:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.915021 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.939323 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.956445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.979047 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:57 crc kubenswrapper[4955]: I1128 06:21:57.999437 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:57Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.008817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.009028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.009176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.009320 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.009456 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.012192 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.028019 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:21:58Z is after 2025-08-24T17:21:41Z" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.112926 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.113190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.113337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.113445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.113574 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.216543 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.216589 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.216605 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.216631 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.216648 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.319965 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.320030 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.320050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.320081 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.320103 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.423802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.423872 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.423889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.423920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.423940 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.526936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.527005 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.527025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.527048 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.527065 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.630483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.630584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.630605 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.630630 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.630649 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.703262 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.703295 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.703281 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:21:58 crc kubenswrapper[4955]: E1128 06:21:58.703460 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:21:58 crc kubenswrapper[4955]: E1128 06:21:58.703670 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:21:58 crc kubenswrapper[4955]: E1128 06:21:58.703788 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.734420 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.734495 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.734556 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.734586 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.734604 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.837989 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.838049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.838066 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.838092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.838111 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.941558 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.941615 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.941631 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.941656 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:58 crc kubenswrapper[4955]: I1128 06:21:58.941676 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:58Z","lastTransitionTime":"2025-11-28T06:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.045156 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.045224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.045242 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.045272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.045292 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.148859 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.148942 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.148966 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.148999 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.149021 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.251781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.251857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.251880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.251908 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.251931 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.355869 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.355949 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.355971 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.356000 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.356025 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.459075 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.459177 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.459197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.459224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.459243 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.562545 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.562607 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.562625 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.562654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.562672 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.645340 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:59 crc kubenswrapper[4955]: E1128 06:21:59.645568 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:59 crc kubenswrapper[4955]: E1128 06:21:59.645696 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs podName:483773b2-23ab-4ebe-8111-f553a0c95523 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:07.645666549 +0000 UTC m=+50.234922159 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs") pod "network-metrics-daemon-mhptq" (UID: "483773b2-23ab-4ebe-8111-f553a0c95523") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.665531 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.665590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.665613 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.665640 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.665662 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.703634 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:21:59 crc kubenswrapper[4955]: E1128 06:21:59.703890 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.769176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.769550 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.769589 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.769616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.769634 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.872655 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.872716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.872739 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.872767 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.872790 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.975231 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.975298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.975318 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.975344 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:21:59 crc kubenswrapper[4955]: I1128 06:21:59.975360 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:21:59Z","lastTransitionTime":"2025-11-28T06:21:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.078008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.078137 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.078163 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.078193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.078214 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.180965 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.181028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.181049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.181073 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.181092 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.284164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.284225 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.284275 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.284307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.284327 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.386981 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.387046 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.387062 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.387084 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.387102 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.490778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.490851 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.490873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.490905 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.490926 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.594032 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.594089 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.594105 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.594129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.594146 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.697464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.697588 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.697610 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.697634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.697656 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.703883 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.703906 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.703944 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:00 crc kubenswrapper[4955]: E1128 06:22:00.704042 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:00 crc kubenswrapper[4955]: E1128 06:22:00.704223 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:00 crc kubenswrapper[4955]: E1128 06:22:00.704387 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.801277 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.801336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.801353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.801386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.801409 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.905402 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.905474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.905497 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.905560 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:00 crc kubenswrapper[4955]: I1128 06:22:00.905586 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:00Z","lastTransitionTime":"2025-11-28T06:22:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.009135 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.009231 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.009258 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.009292 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.009317 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.037559 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.037641 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.037667 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.037697 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.037724 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: E1128 06:22:01.063987 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.069979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.070062 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.070088 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.070149 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.070176 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: E1128 06:22:01.093112 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.102753 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.103759 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.103788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.103815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.103833 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: E1128 06:22:01.125431 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.131533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.131635 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.131699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.131731 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.131757 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: E1128 06:22:01.152108 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.157363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.157429 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.157449 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.157477 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.157496 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: E1128 06:22:01.180493 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:01Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:01 crc kubenswrapper[4955]: E1128 06:22:01.180745 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.183886 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.183948 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.183969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.184001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.184025 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.287116 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.287193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.287205 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.287228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.287240 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.390096 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.390150 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.390166 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.390190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.390207 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.493479 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.493587 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.493609 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.493635 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.493655 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.597292 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.597374 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.597397 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.597427 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.597445 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.700190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.700261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.700283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.700311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.700328 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.703670 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:01 crc kubenswrapper[4955]: E1128 06:22:01.703865 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.803923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.803976 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.803989 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.804009 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.804024 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.907561 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.907629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.907649 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.907675 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:01 crc kubenswrapper[4955]: I1128 06:22:01.907727 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:01Z","lastTransitionTime":"2025-11-28T06:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.010901 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.010985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.011003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.011035 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.011063 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.114863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.114934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.114959 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.114986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.115006 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.218500 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.218597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.218613 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.218639 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.218659 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.322013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.322100 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.322124 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.322155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.322178 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.425186 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.425243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.425253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.425266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.425276 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.528271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.528312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.528324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.528341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.528354 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.632002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.632044 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.632054 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.632069 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.632080 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.704239 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.704257 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:02 crc kubenswrapper[4955]: E1128 06:22:02.704426 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.704263 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:02 crc kubenswrapper[4955]: E1128 06:22:02.704556 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:02 crc kubenswrapper[4955]: E1128 06:22:02.704852 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.735043 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.735094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.735110 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.735138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.735165 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.838188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.838259 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.838282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.838310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.838333 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.947146 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.947224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.947247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.947276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:02 crc kubenswrapper[4955]: I1128 06:22:02.947295 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:02Z","lastTransitionTime":"2025-11-28T06:22:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.050193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.050254 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.050265 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.050286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.050298 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.154064 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.154114 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.154125 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.154146 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.154160 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.257814 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.257905 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.257931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.258025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.258111 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.362155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.362222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.362241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.362267 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.362285 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.465913 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.465978 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.465996 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.466021 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.466038 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.568683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.568743 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.568762 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.568788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.568806 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.672412 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.672469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.672487 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.672542 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.672560 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.704299 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:03 crc kubenswrapper[4955]: E1128 06:22:03.704487 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.775302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.775367 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.775390 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.775418 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.775443 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.878087 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.878141 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.878155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.878176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.878192 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.981598 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.981758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.981781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.981806 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:03 crc kubenswrapper[4955]: I1128 06:22:03.981825 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:03Z","lastTransitionTime":"2025-11-28T06:22:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.084754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.084817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.084840 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.084873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.084896 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.188351 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.188405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.188432 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.188457 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.188476 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.291054 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.291158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.291177 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.291203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.291220 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.394243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.394320 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.394344 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.394375 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.394399 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.497058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.497110 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.497125 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.497147 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.497164 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.601173 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.601228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.601245 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.601269 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.601286 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.703369 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.703389 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.703391 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:04 crc kubenswrapper[4955]: E1128 06:22:04.703782 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:04 crc kubenswrapper[4955]: E1128 06:22:04.703989 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.704012 4955 scope.go:117] "RemoveContainer" containerID="97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45" Nov 28 06:22:04 crc kubenswrapper[4955]: E1128 06:22:04.704083 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.704421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.704474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.704492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.704559 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.704580 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.808234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.808302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.808325 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.808356 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.808377 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.890827 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.906111 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.912243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.912309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.912334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.912362 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.912381 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:04Z","lastTransitionTime":"2025-11-28T06:22:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.917267 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.944248 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:04 crc kubenswrapper[4955]: I1128 06:22:04.980742 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.001261 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:04Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.015178 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.015222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.015234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.015253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.015266 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.019777 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.036325 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.041579 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/1.log" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.046625 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.047567 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.056039 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.073396 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.091078 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.113108 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.117874 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.117903 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.117915 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.117931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.117944 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.125531 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.148310 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902
046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.161216 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.176626 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.194603 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.210344 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.220804 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.220846 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.220857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.220874 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.220887 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.232126 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z 
is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.241739 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc 
kubenswrapper[4955]: I1128 06:22:05.256049 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.274041 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.286445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.302399 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff712116
1eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.314781 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.323120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.323157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.323166 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.323178 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.323187 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.331081 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.343260 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.353170 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.365409 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.380140 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.390488 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.401486 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.412601 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.425297 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.425330 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.425347 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.425363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.425373 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.427717 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.444497 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:05Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.527453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.527530 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.527542 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.527560 4955 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.527573 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.630371 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.630759 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.630773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.630788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.630800 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.703468 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:05 crc kubenswrapper[4955]: E1128 06:22:05.703707 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.733895 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.733972 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.733995 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.734023 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.734045 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.837562 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.837633 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.837658 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.837688 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.837710 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.940899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.941049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.941070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.941092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:05 crc kubenswrapper[4955]: I1128 06:22:05.941110 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:05Z","lastTransitionTime":"2025-11-28T06:22:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.044612 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.044678 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.044698 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.044724 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.044743 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.052440 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/2.log" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.053216 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/1.log" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.057830 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6" exitCode=1 Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.057878 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.057914 4955 scope.go:117] "RemoveContainer" containerID="97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.059734 4955 scope.go:117] "RemoveContainer" containerID="c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6" Nov 28 06:22:06 crc kubenswrapper[4955]: E1128 06:22:06.060246 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.079948 4955 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.106245 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.119999 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.137857 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.147038 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.147096 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.147119 4955 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.147146 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.147193 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.160253 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.187734 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.207719 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.229554 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.241758 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc 
kubenswrapper[4955]: I1128 06:22:06.249586 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.249624 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.249637 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.249654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.249666 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.259351 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97a6f2525a48cb18183e5944c324b8474ac1d7673e964e44ecb49644e09b1e45\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:21:48Z\\\",\\\"message\\\":\\\"4ln5h\\\\nI1128 06:21:48.427665 6391 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI1128 06:21:48.427649 6391 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-dxhtm in node crc\\\\nI1128 06:21:48.427676 6391 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI1128 06:21:48.427682 6391 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-dxhtm after 0 failed attempt(s)\\\\nI1128 06:21:48.427684 6391 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nI1128 06:21:48.427689 6391 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF1128 06:21:48.427681 6391 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network con\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer 
Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},
{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.274173 4955 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.285407 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.295143 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.307662 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.322005 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.333176 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.345263 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:06Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.352423 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.352485 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.352534 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.352563 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.352585 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.455302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.455347 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.455359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.455376 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.455388 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.559191 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.559280 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.559303 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.559336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.559362 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.663113 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.663187 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.663206 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.663237 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.663262 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.703825 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.703875 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.703974 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:06 crc kubenswrapper[4955]: E1128 06:22:06.704195 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:06 crc kubenswrapper[4955]: E1128 06:22:06.704363 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:06 crc kubenswrapper[4955]: E1128 06:22:06.704675 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.766446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.766564 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.766592 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.766627 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.766652 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.869925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.870003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.870024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.870052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.870071 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.972786 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.972877 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.972897 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.972927 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:06 crc kubenswrapper[4955]: I1128 06:22:06.972944 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:06Z","lastTransitionTime":"2025-11-28T06:22:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.064957 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/2.log" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.071118 4955 scope.go:117] "RemoveContainer" containerID="c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6" Nov 28 06:22:07 crc kubenswrapper[4955]: E1128 06:22:07.071411 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.075951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.076003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.076020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.076046 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.076067 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.097761 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.120723 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.140891 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.172773 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.179171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.179270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.179295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.179323 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.179342 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.195447 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.218134 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.235184 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.253260 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.274214 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.282849 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.282949 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.282968 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.283031 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.283050 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.298624 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.314594 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.334268 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.353417 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.373687 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.387055 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.387123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.387145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc 
kubenswrapper[4955]: I1128 06:22:07.387174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.387197 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.390315 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.413332 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.429952 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc 
kubenswrapper[4955]: I1128 06:22:07.490468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.490566 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.490586 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.490610 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.490633 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.593924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.593964 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.593978 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.593995 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.594009 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.697190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.697249 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.697272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.697301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.697321 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.703799 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:07 crc kubenswrapper[4955]: E1128 06:22:07.704010 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.722866 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:07 crc kubenswrapper[4955]: E1128 06:22:07.723142 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:22:07 crc kubenswrapper[4955]: E1128 06:22:07.723221 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs podName:483773b2-23ab-4ebe-8111-f553a0c95523 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:23.723193653 +0000 UTC m=+66.312449263 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs") pod "network-metrics-daemon-mhptq" (UID: "483773b2-23ab-4ebe-8111-f553a0c95523") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.723921 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.747581 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.768134 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.786886 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.799332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.799399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.799413 4955 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.799431 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.799443 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.807252 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.822022 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.835491 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.849965 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.865031 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc 
kubenswrapper[4955]: I1128 06:22:07.879424 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.894716 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.902737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.902800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:07 crc kubenswrapper[4955]: 
I1128 06:22:07.902826 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.902857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.902881 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:07Z","lastTransitionTime":"2025-11-28T06:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.909408 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.930572 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.943445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.962728 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.979550 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:07 crc kubenswrapper[4955]: I1128 06:22:07.992286 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:07Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.006270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.006322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.006341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.006363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.006381 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.109203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.109238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.109253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.109271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.109285 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.212613 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.212710 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.212735 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.212761 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.212782 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.316190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.316699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.316906 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.317148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.317340 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.420422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.420482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.420498 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.420555 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.420574 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.523630 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.523961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.524126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.524265 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.524438 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.532498 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.532689 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:22:40.532652601 +0000 UTC m=+83.121908201 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.628272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.628373 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.628420 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.628445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.628462 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.633329 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.633411 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.633452 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.633491 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633637 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633662 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633702 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633723 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:40.633701403 +0000 UTC m=+83.222957003 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633720 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633723 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633859 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:40.633826447 +0000 UTC m=+83.223082087 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633981 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:40.63393949 +0000 UTC m=+83.223195090 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.633728 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.634046 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.634066 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.634153 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:40.634128375 +0000 UTC m=+83.223383975 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.704064 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.704132 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.704168 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.704280 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.704441 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:08 crc kubenswrapper[4955]: E1128 06:22:08.704699 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.731192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.731262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.731285 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.731314 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.731337 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.835203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.835259 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.835276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.835300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.835320 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.939115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.939199 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.939218 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.939243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:08 crc kubenswrapper[4955]: I1128 06:22:08.939260 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:08Z","lastTransitionTime":"2025-11-28T06:22:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.041699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.041760 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.041777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.041801 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.041819 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.145072 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.145133 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.145151 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.145181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.145204 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.247790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.247857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.247873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.247898 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.247917 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.352052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.352133 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.352156 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.352186 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.352210 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.454873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.454946 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.454968 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.454998 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.455021 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.557924 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.557980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.557997 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.558019 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.558036 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.661480 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.661585 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.661617 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.661646 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.661665 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.704311 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:09 crc kubenswrapper[4955]: E1128 06:22:09.704617 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.764337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.764397 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.764414 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.764475 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.764493 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.867684 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.867766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.867784 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.867809 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.867826 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.971181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.971255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.971273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.971298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:09 crc kubenswrapper[4955]: I1128 06:22:09.971315 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:09Z","lastTransitionTime":"2025-11-28T06:22:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.073710 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.073764 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.073781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.073804 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.073823 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.176223 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.176299 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.176335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.176364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.176385 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.279951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.280029 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.280052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.280083 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.280106 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.383680 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.383743 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.383761 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.383792 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.383810 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.487559 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.487657 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.487684 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.487710 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.487776 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.591327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.591399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.591421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.591444 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.591461 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.694354 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.694417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.694443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.694473 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.694496 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.703881 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.703913 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.703885 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:10 crc kubenswrapper[4955]: E1128 06:22:10.704041 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:10 crc kubenswrapper[4955]: E1128 06:22:10.704197 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:10 crc kubenswrapper[4955]: E1128 06:22:10.704415 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.797451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.797591 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.797634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.797682 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.797719 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.901438 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.901565 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.901594 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.901626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:10 crc kubenswrapper[4955]: I1128 06:22:10.901661 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:10Z","lastTransitionTime":"2025-11-28T06:22:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.005473 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.005584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.005604 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.005628 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.005649 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.108351 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.108437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.108458 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.108484 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.108501 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.211126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.211183 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.211201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.211225 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.211243 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.314174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.314234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.314256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.314284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.314307 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.417567 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.417678 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.417702 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.417730 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.417755 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.426179 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.426238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.426255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.426280 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.426298 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: E1128 06:22:11.447366 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.452804 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.452925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.452956 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.452988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.453012 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: E1128 06:22:11.475155 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.480053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.480093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.480111 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.480136 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.480154 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: E1128 06:22:11.500369 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.505099 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.505158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.505179 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.505202 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.505218 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: E1128 06:22:11.525471 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.530651 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.530719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.530742 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.530774 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.530797 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: E1128 06:22:11.548938 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:11Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:11 crc kubenswrapper[4955]: E1128 06:22:11.549251 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.552166 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.552235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.552259 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.552289 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.552313 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.655954 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.656017 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.656038 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.656068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.656090 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.704015 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:11 crc kubenswrapper[4955]: E1128 06:22:11.704163 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.759316 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.759597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.759764 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.759890 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.760016 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.863175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.863768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.863976 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.864204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.864378 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.967432 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.967478 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.967495 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.967547 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:11 crc kubenswrapper[4955]: I1128 06:22:11.967565 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:11Z","lastTransitionTime":"2025-11-28T06:22:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.071015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.071077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.071097 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.071122 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.071140 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.174363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.174765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.174912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.175054 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.175193 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.278627 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.278687 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.278699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.278720 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.278735 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.381169 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.381428 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.381534 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.381616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.381681 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.484369 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.484436 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.484447 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.484468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.484479 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.587529 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.587598 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.587613 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.587637 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.587653 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.690340 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.690399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.690414 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.690437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.690453 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.704051 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.704052 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.704179 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:12 crc kubenswrapper[4955]: E1128 06:22:12.704308 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:12 crc kubenswrapper[4955]: E1128 06:22:12.704533 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:12 crc kubenswrapper[4955]: E1128 06:22:12.704596 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.793607 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.793684 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.793703 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.793733 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.793753 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.897747 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.897831 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.897850 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.897883 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:12 crc kubenswrapper[4955]: I1128 06:22:12.897903 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:12Z","lastTransitionTime":"2025-11-28T06:22:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.001201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.001263 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.001282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.001306 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.001324 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.104064 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.104488 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.104765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.104990 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.105123 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.208329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.208387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.208403 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.208426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.208443 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.311984 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.312037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.312056 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.312079 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.312097 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.414763 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.415174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.415360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.415553 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.415728 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.518433 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.518813 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.519071 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.519263 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.519444 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.622945 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.623002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.623018 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.623041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.623058 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.704449 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:13 crc kubenswrapper[4955]: E1128 06:22:13.704708 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.725606 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.725659 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.725680 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.725706 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.725726 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.828896 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.829015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.829040 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.829074 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.829097 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.932931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.933007 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.933019 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.933036 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:13 crc kubenswrapper[4955]: I1128 06:22:13.933048 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:13Z","lastTransitionTime":"2025-11-28T06:22:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.035357 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.035432 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.035457 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.035485 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.035543 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.138951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.139003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.139020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.139045 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.139062 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.243189 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.243273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.243297 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.243325 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.243345 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.346990 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.348377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.348639 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.348904 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.349167 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.452912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.452966 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.452984 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.453010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.453028 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.556371 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.556769 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.556993 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.557210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.557402 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.662248 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.662312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.662328 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.662355 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.662377 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.709445 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.709491 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:14 crc kubenswrapper[4955]: E1128 06:22:14.709712 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:14 crc kubenswrapper[4955]: E1128 06:22:14.709750 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.710090 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:14 crc kubenswrapper[4955]: E1128 06:22:14.710326 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.765360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.765415 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.765433 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.765456 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.765474 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.871408 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.871963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.872340 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.872476 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.872655 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.976382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.976812 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.976949 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.977078 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:14 crc kubenswrapper[4955]: I1128 06:22:14.977255 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:14Z","lastTransitionTime":"2025-11-28T06:22:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.080333 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.080381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.080393 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.080412 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.080424 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.184972 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.185051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.185071 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.185098 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.185117 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.288071 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.288121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.288138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.288162 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.288179 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.391533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.391589 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.391601 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.391617 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.391629 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.494796 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.494845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.494855 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.494872 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.494883 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.597883 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.597920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.597930 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.597945 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.597955 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.701736 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.702076 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.702228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.702375 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.702543 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.704921 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:15 crc kubenswrapper[4955]: E1128 06:22:15.705137 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.805570 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.805626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.805644 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.805666 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.805685 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.908922 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.908979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.908996 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.909019 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:15 crc kubenswrapper[4955]: I1128 06:22:15.909036 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:15Z","lastTransitionTime":"2025-11-28T06:22:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.012644 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.012707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.012726 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.012749 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.012767 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.115588 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.115907 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.116332 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.116622 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.116858 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.222582 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.222983 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.223093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.223183 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.223269 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.326039 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.326101 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.326118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.326145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.326162 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.429197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.429810 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.429921 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.430013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.430109 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.533367 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.533736 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.533869 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.533991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.534115 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.637158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.637488 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.637890 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.638063 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.638249 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.703880 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.703883 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.703895 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:16 crc kubenswrapper[4955]: E1128 06:22:16.704562 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:16 crc kubenswrapper[4955]: E1128 06:22:16.704782 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:16 crc kubenswrapper[4955]: E1128 06:22:16.704365 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.741642 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.742038 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.742261 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.742468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.742691 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.846204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.846562 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.846710 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.846868 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.847010 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.950250 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.950311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.950328 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.950353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:16 crc kubenswrapper[4955]: I1128 06:22:16.950371 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:16Z","lastTransitionTime":"2025-11-28T06:22:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.053947 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.054011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.054029 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.054053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.054073 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.158166 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.158236 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.158264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.158294 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.158316 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.262195 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.262264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.262286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.262314 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.262338 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.365465 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.365600 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.365625 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.365652 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.365672 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.469186 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.469279 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.469298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.469322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.469341 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.572316 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.572352 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.572362 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.572380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.572391 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.675683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.675766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.675788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.675825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.675849 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.703606 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:17 crc kubenswrapper[4955]: E1128 06:22:17.703900 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.728718 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.747246 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.766714 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.779421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.779540 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.779570 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.779601 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.779624 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.787162 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438
a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.812375 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.828418 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.850237 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.870254 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.882586 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.882645 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.882663 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.882687 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.882708 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.889643 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.906752 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.927235 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.945008 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc 
kubenswrapper[4955]: I1128 06:22:17.967270 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.988012 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.988063 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.988080 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.988103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.988120 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:17Z","lastTransitionTime":"2025-11-28T06:22:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:17 crc kubenswrapper[4955]: I1128 06:22:17.989297 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:17Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.009036 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:18Z is after 
2025-08-24T17:21:41Z" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.030354 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.058656 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:18Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.091190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.091254 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.091270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.091301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.091319 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.194717 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.194833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.194893 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.194921 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.195049 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.298360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.298775 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.298795 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.298818 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.298838 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.402451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.402618 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.402638 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.402662 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.402679 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.505079 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.506051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.506210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.506373 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.506499 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.609527 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.609911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.610061 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.610186 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.610311 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.703710 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:18 crc kubenswrapper[4955]: E1128 06:22:18.703903 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.704084 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:18 crc kubenswrapper[4955]: E1128 06:22:18.704404 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.704635 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:18 crc kubenswrapper[4955]: E1128 06:22:18.704853 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.712906 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.712986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.713010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.713041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.713066 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.816293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.816362 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.816383 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.816412 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.816433 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.920283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.920714 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.920875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.921024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:18 crc kubenswrapper[4955]: I1128 06:22:18.921169 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:18Z","lastTransitionTime":"2025-11-28T06:22:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.024450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.024537 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.024557 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.024581 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.024598 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.127548 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.127601 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.127618 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.127642 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.127660 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.230201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.230280 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.230298 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.230322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.230340 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.334569 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.334630 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.334647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.334669 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.334687 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.438262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.438329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.438349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.438376 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.438398 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.541686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.541765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.541788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.541817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.541839 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.645941 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.645993 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.646010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.646033 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.646051 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.703306 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:19 crc kubenswrapper[4955]: E1128 06:22:19.703496 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.761443 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.761571 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.761585 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.761603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.761620 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.864933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.864973 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.864985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.865010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.865033 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.968489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.968561 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.968579 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.968602 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:19 crc kubenswrapper[4955]: I1128 06:22:19.968620 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:19Z","lastTransitionTime":"2025-11-28T06:22:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.071300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.071353 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.071373 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.071397 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.071415 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.173708 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.173767 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.173785 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.173808 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.173826 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.276753 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.276814 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.276828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.276846 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.276858 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.380341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.380409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.380431 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.380459 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.380481 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.484071 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.484128 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.484148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.484170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.484187 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.587021 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.587081 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.587101 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.587125 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.587142 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.689840 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.689904 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.689921 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.689943 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.689960 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.704196 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.704272 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:20 crc kubenswrapper[4955]: E1128 06:22:20.704395 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.704490 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:20 crc kubenswrapper[4955]: E1128 06:22:20.704697 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:20 crc kubenswrapper[4955]: E1128 06:22:20.704798 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.792482 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.792574 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.792592 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.792616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.792633 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.895321 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.895381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.895399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.895421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.895438 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.998050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.998112 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.998129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.998247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:20 crc kubenswrapper[4955]: I1128 06:22:20.998265 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:20Z","lastTransitionTime":"2025-11-28T06:22:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.100823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.100883 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.100900 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.100922 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.100943 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.203939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.204037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.204056 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.204121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.204141 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.307205 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.307272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.307296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.307325 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.307347 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.409766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.409819 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.409835 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.409856 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.409875 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.512914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.512966 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.512983 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.513007 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.513024 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.616210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.616254 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.616271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.616294 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.616310 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.703314 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:21 crc kubenswrapper[4955]: E1128 06:22:21.703482 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.704538 4955 scope.go:117] "RemoveContainer" containerID="c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6" Nov 28 06:22:21 crc kubenswrapper[4955]: E1128 06:22:21.704807 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.718817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.718863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.718873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.718885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.718897 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.808239 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.808276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.808286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.808301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.808311 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: E1128 06:22:21.821440 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.824414 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.824547 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.824573 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.824603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.824626 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: E1128 06:22:21.838878 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.843295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.843341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.843359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.843382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.843400 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: E1128 06:22:21.859271 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.864646 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.864878 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.865232 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.865686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.865917 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: E1128 06:22:21.881698 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.885782 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.885833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.885849 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.885869 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.885882 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:21 crc kubenswrapper[4955]: E1128 06:22:21.902589 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:21Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:21 crc kubenswrapper[4955]: E1128 06:22:21.902801 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.950300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.950327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.950335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.950348 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:21 crc kubenswrapper[4955]: I1128 06:22:21.950359 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:21Z","lastTransitionTime":"2025-11-28T06:22:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.053376 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.053463 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.053548 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.053586 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.053609 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.157276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.157356 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.157381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.157415 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.157439 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.260634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.260695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.260713 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.260737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.260757 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.363248 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.363276 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.363287 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.363301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.363310 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.465863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.465900 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.465911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.465925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.465936 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.567558 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.567594 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.567605 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.567618 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.567630 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.670408 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.670446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.670456 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.670469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.670480 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.703338 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.703393 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.703338 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:22 crc kubenswrapper[4955]: E1128 06:22:22.703464 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:22 crc kubenswrapper[4955]: E1128 06:22:22.703632 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:22 crc kubenswrapper[4955]: E1128 06:22:22.703807 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.773059 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.773118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.773136 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.773161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.773179 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.875418 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.875524 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.875537 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.875552 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.875565 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.977740 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.977792 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.977809 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.977830 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:22 crc kubenswrapper[4955]: I1128 06:22:22.977847 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:22Z","lastTransitionTime":"2025-11-28T06:22:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.081405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.081560 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.081578 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.081602 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.081623 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.186555 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.186665 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.186740 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.186777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.186803 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.289802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.289844 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.289855 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.289870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.289881 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.395681 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.395773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.395802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.395836 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.395871 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.498741 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.498804 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.498821 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.498845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.498863 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.602375 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.602464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.602489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.602556 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.602576 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.703611 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:23 crc kubenswrapper[4955]: E1128 06:22:23.703985 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.708195 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.708243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.708262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.708290 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.708315 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.810935 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.810979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.810991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.811008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.811023 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.816788 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:23 crc kubenswrapper[4955]: E1128 06:22:23.816974 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:22:23 crc kubenswrapper[4955]: E1128 06:22:23.817049 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs podName:483773b2-23ab-4ebe-8111-f553a0c95523 nodeName:}" failed. No retries permitted until 2025-11-28 06:22:55.817029596 +0000 UTC m=+98.406285246 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs") pod "network-metrics-daemon-mhptq" (UID: "483773b2-23ab-4ebe-8111-f553a0c95523") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.913922 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.913963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.913974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.913990 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:23 crc kubenswrapper[4955]: I1128 06:22:23.914002 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:23Z","lastTransitionTime":"2025-11-28T06:22:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.016787 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.016879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.016912 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.016947 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.016968 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.120036 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.120076 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.120088 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.120103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.120117 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.222458 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.222565 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.222584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.222608 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.222626 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.325825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.325864 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.325876 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.325893 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.325907 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.428273 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.428324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.428339 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.428363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.428380 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.530790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.530847 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.530871 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.530899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.530920 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.633744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.633801 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.633818 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.633843 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.633861 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.704290 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.704318 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.704407 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:24 crc kubenswrapper[4955]: E1128 06:22:24.704578 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:24 crc kubenswrapper[4955]: E1128 06:22:24.704662 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:24 crc kubenswrapper[4955]: E1128 06:22:24.704779 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.736656 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.736703 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.736717 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.736734 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.736745 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.839218 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.839287 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.839309 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.839338 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.839359 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.942709 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.942748 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.942758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.942776 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:24 crc kubenswrapper[4955]: I1128 06:22:24.942785 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:24Z","lastTransitionTime":"2025-11-28T06:22:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.046483 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.046565 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.046583 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.046607 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.046624 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.134711 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/0.log" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.134793 4955 generic.go:334] "Generic (PLEG): container finished" podID="765bbe56-be77-4d81-824f-ad16924029f4" containerID="96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5" exitCode=1 Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.134836 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dxhtm" event={"ID":"765bbe56-be77-4d81-824f-ad16924029f4","Type":"ContainerDied","Data":"96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.135349 4955 scope.go:117] "RemoveContainer" containerID="96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.152144 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.152177 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.152187 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.152201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.152211 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.154338 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.178856 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.192774 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.205485 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.222781 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.238169 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.253746 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.254885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.254969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.254993 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.255025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.255047 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.273555 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:24Z\\\",\\\"message\\\":\\\"2025-11-28T06:21:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1\\\\n2025-11-28T06:21:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1 to /host/opt/cni/bin/\\\\n2025-11-28T06:21:39Z [verbose] multus-daemon started\\\\n2025-11-28T06:21:39Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.286635 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc 
kubenswrapper[4955]: I1128 06:22:25.306382 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.320784 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.332775 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.359699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.359758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.359774 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.359797 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.359814 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.365065 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.378557 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.391128 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.400270 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.408661 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:25Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.461973 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.462012 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.462020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.462033 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.462041 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.563785 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.563829 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.563844 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.563863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.563872 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.666782 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.666859 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.666872 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.666888 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.666900 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.704212 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:25 crc kubenswrapper[4955]: E1128 06:22:25.704665 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.769723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.769813 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.770252 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.770334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.770658 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.873982 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.874032 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.874050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.874073 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.874090 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.977412 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.977453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.977469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.977493 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:25 crc kubenswrapper[4955]: I1128 06:22:25.977542 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:25Z","lastTransitionTime":"2025-11-28T06:22:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.080348 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.080380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.080388 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.080402 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.080411 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.139379 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/0.log" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.139413 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dxhtm" event={"ID":"765bbe56-be77-4d81-824f-ad16924029f4","Type":"ContainerStarted","Data":"a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.155857 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.171702 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.182907 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.182934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.182942 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.182954 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.182964 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.186181 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.197642 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae
8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.209357 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.218850 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.230320 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.241249 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:24Z\\\",\\\"message\\\":\\\"2025-11-28T06:21:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1\\\\n2025-11-28T06:21:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1 to /host/opt/cni/bin/\\\\n2025-11-28T06:21:39Z [verbose] multus-daemon started\\\\n2025-11-28T06:21:39Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.253112 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc 
kubenswrapper[4955]: I1128 06:22:26.274044 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.285777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.285803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.285815 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.285830 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.285842 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.286940 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.303518 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 
2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.315400 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.329439 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.349967 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.364593 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.380878 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:26Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.388448 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.388518 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.388531 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.388545 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.388554 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.491770 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.491822 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.491834 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.491856 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.491869 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.594339 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.594395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.594417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.594446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.594467 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.696899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.696960 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.696978 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.697001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.697021 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.704162 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:26 crc kubenswrapper[4955]: E1128 06:22:26.704423 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.704263 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:26 crc kubenswrapper[4955]: E1128 06:22:26.704639 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.704178 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:26 crc kubenswrapper[4955]: E1128 06:22:26.705113 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.803841 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.803879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.803889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.803910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.803921 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.906181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.906224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.906235 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.906251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:26 crc kubenswrapper[4955]: I1128 06:22:26.906264 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:26Z","lastTransitionTime":"2025-11-28T06:22:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.009190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.009230 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.009240 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.009256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.009267 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.111413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.111445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.111456 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.111469 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.111477 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.213817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.213882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.213899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.213923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.213967 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.315892 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.315939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.315948 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.315963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.315972 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.418018 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.418092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.418117 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.418147 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.418171 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.521025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.521104 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.521129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.521158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.521180 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.624351 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.624406 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.624422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.624446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.624464 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.703808 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:27 crc kubenswrapper[4955]: E1128 06:22:27.704052 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.721069 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.726435 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.726495 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.726552 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.726584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.726604 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.739361 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.759550 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.784956 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.795324 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.807300 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.817142 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.827182 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.828329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.828362 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.828371 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.828387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.828396 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.839071 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.855653 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.866680 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.927173 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.930462 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.930489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.930497 4955 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.930525 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.930534 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:27Z","lastTransitionTime":"2025-11-28T06:22:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.942432 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.953840 4955 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.967662 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.978923 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:24Z\\\",\\\"message\\\":\\\"2025-11-28T06:21:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1\\\\n2025-11-28T06:21:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1 to /host/opt/cni/bin/\\\\n2025-11-28T06:21:39Z [verbose] multus-daemon started\\\\n2025-11-28T06:21:39Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:27 crc kubenswrapper[4955]: I1128 06:22:27.988847 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:27Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:28 crc 
kubenswrapper[4955]: I1128 06:22:28.032915 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.033008 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.033026 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.033049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.033068 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.135142 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.135229 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.135241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.135258 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.135269 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.238566 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.238652 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.238677 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.238708 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.238726 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.342552 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.342621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.342647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.342678 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.342703 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.446028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.446078 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.446095 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.446118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.446135 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.549174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.549241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.549253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.549274 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.549288 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.651761 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.651800 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.651811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.651828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.651840 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.704270 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.704374 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.704535 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:28 crc kubenswrapper[4955]: E1128 06:22:28.704540 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:28 crc kubenswrapper[4955]: E1128 06:22:28.704695 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:28 crc kubenswrapper[4955]: E1128 06:22:28.704872 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.754766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.754832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.754852 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.754881 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.754900 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.857129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.857188 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.857204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.857230 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.857249 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.960442 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.960556 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.960597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.960632 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:28 crc kubenswrapper[4955]: I1128 06:22:28.960657 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:28Z","lastTransitionTime":"2025-11-28T06:22:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.063355 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.063404 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.063414 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.063433 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.063451 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.166217 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.166248 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.166257 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.166274 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.166284 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.269965 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.270011 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.270023 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.270041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.270052 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.372656 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.372712 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.372723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.372742 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.372755 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.475238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.475381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.475398 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.475419 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.475435 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.578417 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.578484 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.578528 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.578553 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.578571 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.680932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.681005 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.681027 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.681061 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.681085 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.704434 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:29 crc kubenswrapper[4955]: E1128 06:22:29.704701 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.783176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.783245 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.783255 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.783268 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.783279 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.886396 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.886454 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.886464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.886484 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.886516 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.989639 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.989706 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.989720 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.989761 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:29 crc kubenswrapper[4955]: I1128 06:22:29.989774 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:29Z","lastTransitionTime":"2025-11-28T06:22:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.092760 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.092816 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.092826 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.092845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.092854 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.194899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.194970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.194988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.195020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.195038 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.297823 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.297880 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.297893 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.297911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.297924 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.401266 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.401344 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.401367 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.401398 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.401425 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.504785 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.504846 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.504858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.504881 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.504893 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.608331 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.608401 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.608425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.608452 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.608469 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.704139 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.704189 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.704194 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:30 crc kubenswrapper[4955]: E1128 06:22:30.704418 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:30 crc kubenswrapper[4955]: E1128 06:22:30.704582 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:30 crc kubenswrapper[4955]: E1128 06:22:30.704702 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.711317 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.711378 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.711397 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.711422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.711440 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.814689 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.814754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.814771 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.814794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.814812 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.917319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.917383 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.917402 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.917425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:30 crc kubenswrapper[4955]: I1128 06:22:30.917479 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:30Z","lastTransitionTime":"2025-11-28T06:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.019864 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.019911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.019923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.019943 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.019956 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.123810 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.123860 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.123873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.123889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.123900 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.226700 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.226747 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.226762 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.226782 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.226797 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.329877 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.329925 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.329942 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.329967 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.329988 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.433558 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.433610 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.433626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.433648 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.433667 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.539686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.539746 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.539763 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.539788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.539805 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.642662 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.642707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.642903 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.642920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.642929 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.704065 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:31 crc kubenswrapper[4955]: E1128 06:22:31.704241 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.745283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.745317 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.745330 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.745346 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.745358 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.848349 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.848426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.848446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.848547 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.848567 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.951783 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.951855 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.951871 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.951897 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:31 crc kubenswrapper[4955]: I1128 06:22:31.951932 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:31Z","lastTransitionTime":"2025-11-28T06:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.055055 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.055121 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.055140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.055164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.055183 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.158885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.158963 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.158989 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.159020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.159044 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.224805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.224887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.224913 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.224944 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.224966 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: E1128 06:22:32.244106 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.249705 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.249813 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.249832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.249857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.249896 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: E1128 06:22:32.268377 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.274790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.274870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.274887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.274914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.274933 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: E1128 06:22:32.319155 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.324951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.325028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.325051 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.325082 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.325098 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: E1128 06:22:32.343681 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:32Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:32 crc kubenswrapper[4955]: E1128 06:22:32.344028 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.346625 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.346701 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.346724 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.346753 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.346776 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.450329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.450395 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.450413 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.450441 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.450459 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.553659 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.553754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.553789 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.553826 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.553848 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.656910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.656967 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.656986 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.657014 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.657032 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.703790 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.703890 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.703825 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:32 crc kubenswrapper[4955]: E1128 06:22:32.704039 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:32 crc kubenswrapper[4955]: E1128 06:22:32.704331 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:32 crc kubenswrapper[4955]: E1128 06:22:32.704201 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.759407 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.759476 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.759492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.759571 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.759591 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.863223 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.863284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.863301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.863326 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.863344 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.966069 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.966115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.966130 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.966150 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:32 crc kubenswrapper[4955]: I1128 06:22:32.966165 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:32Z","lastTransitionTime":"2025-11-28T06:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.069575 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.069647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.069660 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.069702 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.069717 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.172610 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.172682 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.172700 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.172725 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.172743 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.275661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.275741 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.275762 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.275790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.275807 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.379240 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.379310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.379331 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.379358 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.379383 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.482556 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.482647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.482664 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.482688 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.482712 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.585401 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.585584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.585667 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.585703 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.585793 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.689956 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.690024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.690040 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.690065 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.690084 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.704361 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:33 crc kubenswrapper[4955]: E1128 06:22:33.704575 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.793384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.793471 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.793493 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.793540 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.793557 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.896626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.896744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.896762 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.896785 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:33 crc kubenswrapper[4955]: I1128 06:22:33.896801 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:33Z","lastTransitionTime":"2025-11-28T06:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.000479 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.000596 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.000622 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.000656 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.000680 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.103641 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.103699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.103716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.103739 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.103755 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.206537 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.206593 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.206611 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.206634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.206651 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.309865 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.309933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.309950 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.309974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.309993 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.412492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.412586 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.412608 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.412633 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.412650 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.515729 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.515790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.515806 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.515831 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.515849 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.619744 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.619840 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.619856 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.619879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.619897 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.704262 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.704262 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:34 crc kubenswrapper[4955]: E1128 06:22:34.704470 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.704298 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:34 crc kubenswrapper[4955]: E1128 06:22:34.704707 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:34 crc kubenswrapper[4955]: E1128 06:22:34.704825 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.722682 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.722745 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.722768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.722799 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.722822 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.826300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.826386 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.826412 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.826448 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.826471 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.930473 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.930597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.930623 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.930653 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:34 crc kubenswrapper[4955]: I1128 06:22:34.930675 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:34Z","lastTransitionTime":"2025-11-28T06:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.033548 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.033595 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.033610 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.033630 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.033645 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.137808 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.137876 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.137894 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.137920 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.137938 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.241775 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.241845 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.241867 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.241898 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.241920 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.345717 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.345812 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.346111 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.346442 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.346495 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.448899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.448977 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.449001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.449028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.449044 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.552352 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.552425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.552449 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.552478 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.552500 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.655412 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.655462 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.655480 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.655532 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.655551 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.704393 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:35 crc kubenswrapper[4955]: E1128 06:22:35.705235 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.705805 4955 scope.go:117] "RemoveContainer" containerID="c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.758295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.758364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.758385 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.758415 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.758437 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.861625 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.861701 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.861723 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.861756 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.861779 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.964966 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.965013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.965028 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.965049 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:35 crc kubenswrapper[4955]: I1128 06:22:35.965061 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:35Z","lastTransitionTime":"2025-11-28T06:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.067472 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.067554 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.067573 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.067604 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.067620 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.169295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.169364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.169379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.169426 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.169439 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.178946 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/2.log" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.182184 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.182500 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.214373 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.237413 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.254696 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.273611 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.273673 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.273691 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.273741 4955 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.273761 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.274928 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.295026 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.314343 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.335560 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.357342 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.376597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.376677 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.376717 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.376740 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.376756 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.381121 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.405266 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.422699 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.439914 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.451618 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.467499 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.477785 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.479138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.479178 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.479189 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.479206 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.479219 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.494950 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:24Z\\\",\\\"message\\\":\\\"2025-11-28T06:21:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1\\\\n2025-11-28T06:21:39+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1 to /host/opt/cni/bin/\\\\n2025-11-28T06:21:39Z [verbose] multus-daemon started\\\\n2025-11-28T06:21:39Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.506292 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:36Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.581599 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.581644 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.581659 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.581676 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.581689 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.684716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.684777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.684795 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.684820 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.684837 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.704094 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.704151 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:36 crc kubenswrapper[4955]: E1128 06:22:36.704222 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.704171 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:36 crc kubenswrapper[4955]: E1128 06:22:36.704314 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:36 crc kubenswrapper[4955]: E1128 06:22:36.704422 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.787430 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.787470 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.787478 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.787492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.787515 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.889939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.889967 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.889975 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.889988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.889997 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.993052 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.993092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.993101 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.993115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:36 crc kubenswrapper[4955]: I1128 06:22:36.993124 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:36Z","lastTransitionTime":"2025-11-28T06:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.095621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.095664 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.095674 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.095690 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.095703 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.189115 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/3.log" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.190259 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/2.log" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.194105 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" exitCode=1 Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.194169 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.194216 4955 scope.go:117] "RemoveContainer" containerID="c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.195580 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:22:37 crc kubenswrapper[4955]: E1128 06:22:37.196027 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.200216 4955 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.200285 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.200306 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.200329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.200347 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.217710 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.236171 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.249702 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.266445 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.283693 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.303285 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.304367 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.304415 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.304433 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc 
kubenswrapper[4955]: I1128 06:22:37.304456 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.304474 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.321297 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.343866 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:24Z\\\",\\\"message\\\":\\\"2025-11-28T06:21:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1\\\\n2025-11-28T06:21:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1 to /host/opt/cni/bin/\\\\n2025-11-28T06:21:39Z [verbose] multus-daemon started\\\\n2025-11-28T06:21:39Z [verbose] 
Readiness Indicator file check\\\\n2025-11-28T06:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.361171 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.381408 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.401405 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.409698 4955 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.409755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.409794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.409820 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.409842 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.420065 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.450848 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:37Z\\\",\\\"message\\\":\\\".io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:22:36.777771 6964 factory.go:656] Stopping watch factory\\\\nI1128 06:22:36.777831 6964 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:22:36.777888 6964 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.778011 6964 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.778266 6964 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.791885 6964 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 06:22:36.791913 6964 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 06:22:36.791981 6964 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:36.792016 6964 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:36.792126 6964 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.468054 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.483349 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.494593 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.508004 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.512790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.512831 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.512843 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.512860 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.512870 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.615704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.615747 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.615758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.615775 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.615786 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.704223 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:37 crc kubenswrapper[4955]: E1128 06:22:37.704400 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.719706 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.720661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.720694 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.720709 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.720729 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.720745 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.726037 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438
a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.748385 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.763945 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.781218 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.802826 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.823005 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.823058 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.823080 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.823110 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.823131 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.828128 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.846230 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9b
d4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.865006 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14
da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.880118 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc 
kubenswrapper[4955]: I1128 06:22:37.898497 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.918711 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.926502 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.926606 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.926624 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.926651 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.926674 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:37Z","lastTransitionTime":"2025-11-28T06:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.937134 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.956273 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:24Z\\\",\\\"message\\\":\\\"2025-11-28T06:21:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1\\\\n2025-11-28T06:21:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1 to /host/opt/cni/bin/\\\\n2025-11-28T06:21:39Z [verbose] multus-daemon started\\\\n2025-11-28T06:21:39Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:37 crc kubenswrapper[4955]: I1128 06:22:37.977078 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.000549 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8a106b048bd750dc277f8c67afd46abf303ad104eb00d5c2f5ba0a44f592ae6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:05Z\\\",\\\"message\\\":\\\"Rule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"192.168.126.11\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string(nil), 
Groups:[]string(nil)}}\\\\nI1128 06:22:05.598592 6607 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1128 06:22:05.598611 6607 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:default/kubernetes]} name:Service_default/kubernetes_TCP_node_router_crc options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.1:443:169.254.0.2:6443]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {4de02fb8-85f8-4208-9384-785ba5457d16}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1128 06:22:05.598667 6607 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1128 06:22:05.598694 6607 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:05.598712 6607 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:05.598761 6607 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:37Z\\\",\\\"message\\\":\\\".io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:22:36.777771 6964 factory.go:656] Stopping watch factory\\\\nI1128 06:22:36.777831 6964 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:22:36.777888 6964 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.778011 6964 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.778266 6964 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.791885 6964 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 06:22:36.791913 6964 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 06:22:36.791981 6964 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:36.792016 6964 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:36.792126 6964 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:37Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.019159 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.029281 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.029489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.029712 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.029858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.029994 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.036191 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.133454 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.133549 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.133569 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.133597 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.133617 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.200086 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/3.log" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.206339 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:22:38 crc kubenswrapper[4955]: E1128 06:22:38.206647 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.224131 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.236045 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.236123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.236145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.236170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.236189 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.242269 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.263884 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:24Z\\\",\\\"message\\\":\\\"2025-11-28T06:21:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1\\\\n2025-11-28T06:21:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1 to /host/opt/cni/bin/\\\\n2025-11-28T06:21:39Z [verbose] multus-daemon started\\\\n2025-11-28T06:21:39Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-mu
ltus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.282095 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc 
kubenswrapper[4955]: I1128 06:22:38.308231 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.330886 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.343046 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.343098 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.343116 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.343138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.343158 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.352218 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.371256 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.402817 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:37Z\\\",\\\"message\\\":\\\".io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:22:36.777771 6964 factory.go:656] Stopping watch factory\\\\nI1128 06:22:36.777831 6964 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:22:36.777888 6964 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.778011 6964 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.778266 6964 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.791885 6964 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 06:22:36.791913 6964 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 06:22:36.791981 6964 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:36.792016 6964 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:36.792126 6964 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.420655 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e07e1d93-911c-4b1c-92c0-aa9bd3f6d5d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec51985d66d26af8cb2598cf3681efb226637cf70d72ea7091de369dac629fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9614be42e2b08663f23b609bf7a522553ddb510ba0f733232a3d4b3030068f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9614be42e2b08663f23b609bf7a522553ddb510ba0f733232a3d4b3030068f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.442179 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.447341 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.447634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.448702 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.448755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.448784 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.459991 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.479911 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.498646 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.539700 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.551875 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.551936 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.551985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.552017 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.552042 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.557864 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.575474 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae
8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.593955 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:38Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.654937 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.655004 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.655030 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.655060 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.655081 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.703623 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.703756 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:38 crc kubenswrapper[4955]: E1128 06:22:38.703972 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.704029 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:38 crc kubenswrapper[4955]: E1128 06:22:38.704166 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:38 crc kubenswrapper[4955]: E1128 06:22:38.704458 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.757777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.757828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.757843 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.757867 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.757886 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.861010 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.861064 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.861080 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.861103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.861120 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.964476 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.964556 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.964572 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.964592 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:38 crc kubenswrapper[4955]: I1128 06:22:38.964609 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:38Z","lastTransitionTime":"2025-11-28T06:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.068161 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.068282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.068306 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.068336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.068357 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.171114 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.171178 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.171195 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.171219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.171240 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.274716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.274773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.274791 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.274825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.274847 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.378151 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.378219 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.378237 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.378259 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.378276 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.481673 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.481719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.481731 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.481751 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.481764 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.584834 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.584885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.584903 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.584929 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.584946 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.688720 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.688791 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.688810 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.688835 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.688853 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.703323 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:39 crc kubenswrapper[4955]: E1128 06:22:39.703536 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.792077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.792157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.792182 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.792210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.792232 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.895864 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.895947 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.896027 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.896065 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.896091 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.998637 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.998704 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.998722 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.998748 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:39 crc kubenswrapper[4955]: I1128 06:22:39.998766 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:39Z","lastTransitionTime":"2025-11-28T06:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.101311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.101384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.101409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.101438 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.101463 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.204590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.204645 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.204663 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.204686 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.204703 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.307359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.307425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.307448 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.307478 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.307499 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.409884 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.409945 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.409969 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.409997 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.410020 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.512979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.513053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.513072 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.513097 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.513116 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.601780 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.602032 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:23:44.601992173 +0000 UTC m=+147.191247783 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.615946 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.615988 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.616004 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.616023 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.616038 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.703484 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.703597 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.703626 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.703643 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.703678 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.703680 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.703728 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703763 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703784 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703797 4955 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703795 4955 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703836 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703880 4955 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703885 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703902 4955 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703930 4955 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703847 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.703830926 +0000 UTC m=+147.293086506 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.703980 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.704015 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.70398755 +0000 UTC m=+147.293243160 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.704043 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.704026921 +0000 UTC m=+147.293282521 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.704062 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.704052552 +0000 UTC m=+147.293308152 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 06:22:40 crc kubenswrapper[4955]: E1128 06:22:40.704078 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.720140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.720190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.720207 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.720230 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.720246 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.822971 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.823024 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.823041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.823063 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.823079 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.926174 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.926245 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.926270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.926296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:40 crc kubenswrapper[4955]: I1128 06:22:40.926315 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:40Z","lastTransitionTime":"2025-11-28T06:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.029434 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.029492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.029529 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.029548 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.029561 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.132218 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.132282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.132305 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.132335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.132357 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.235061 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.235139 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.235157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.235178 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.235195 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.338387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.338446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.338470 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.338498 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.338552 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.441495 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.441589 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.441611 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.441638 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.441659 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.545002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.545050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.545068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.545094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.545110 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.648311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.648382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.648403 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.648437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.648458 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.703369 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:41 crc kubenswrapper[4955]: E1128 06:22:41.703611 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.751066 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.751106 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.751117 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.751133 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.751147 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.853661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.853732 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.853756 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.853783 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.853805 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.957037 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.957077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.957092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.957118 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:41 crc kubenswrapper[4955]: I1128 06:22:41.957135 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:41Z","lastTransitionTime":"2025-11-28T06:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.060798 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.060863 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.060887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.060917 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.060945 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.164961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.165023 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.165041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.165067 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.165086 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.267923 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.267990 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.268013 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.268046 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.268068 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.372210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.372278 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.372296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.372322 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.372343 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.475468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.475551 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.475568 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.475590 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.475609 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.578644 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.578708 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.578730 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.578758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.578780 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.653389 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.653445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.653464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.653485 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.653502 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.675068 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.685607 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.685657 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.685674 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.685697 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.685716 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.703824 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.703869 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.703852 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.704022 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.704162 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.704275 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.706030 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.710970 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.711363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.711380 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.711403 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.711421 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.731710 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.736942 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.737018 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.737044 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.737075 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.737096 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.758229 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.763468 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.763551 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.763577 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.763603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.763621 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.783819 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:42Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:42 crc kubenswrapper[4955]: E1128 06:22:42.784342 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.787709 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.787770 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.787794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.787824 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.787846 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.890142 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.890190 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.890203 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.890224 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.890239 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.993387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.993487 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.993559 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.993584 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:42 crc kubenswrapper[4955]: I1128 06:22:42.993636 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:42Z","lastTransitionTime":"2025-11-28T06:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.096889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.096948 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.096968 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.096992 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.097010 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.200002 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.200054 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.200070 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.200093 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.200110 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.303030 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.303078 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.303090 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.303108 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.303122 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.406441 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.406501 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.406564 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.406591 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.406608 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.509891 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.509951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.509974 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.510003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.510071 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.613099 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.613158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.613175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.613197 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.613216 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.703647 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:43 crc kubenswrapper[4955]: E1128 06:22:43.703823 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.716618 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.716693 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.716712 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.716738 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.716756 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.819702 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.819775 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.819793 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.819817 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.819837 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.922998 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.923056 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.923077 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.923104 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:43 crc kubenswrapper[4955]: I1128 06:22:43.923125 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:43Z","lastTransitionTime":"2025-11-28T06:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.026187 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.026240 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.026262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.026288 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.026305 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.129379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.129437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.129454 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.129478 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.129496 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.232368 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.232445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.232464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.232490 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.232534 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.335301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.335360 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.335379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.335407 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.335424 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.438673 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.438720 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.438736 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.438761 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.438792 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.542364 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.542439 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.542463 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.542490 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.542549 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.651809 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.651893 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.651914 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.651961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.651980 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.704104 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.704182 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.704216 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:44 crc kubenswrapper[4955]: E1128 06:22:44.704351 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:44 crc kubenswrapper[4955]: E1128 06:22:44.704480 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:44 crc kubenswrapper[4955]: E1128 06:22:44.704622 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.755088 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.755141 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.755158 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.755181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.755200 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.858153 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.858210 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.858230 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.858256 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.858274 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.960700 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.960762 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.960781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.960807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:44 crc kubenswrapper[4955]: I1128 06:22:44.960824 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:44Z","lastTransitionTime":"2025-11-28T06:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.064116 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.064218 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.064238 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.064272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.064300 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.168247 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.168307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.168323 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.168348 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.168365 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.271278 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.271354 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.271377 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.271407 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.271467 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.374192 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.374241 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.374264 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.374296 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.374318 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.477202 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.477262 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.477280 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.477307 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.477349 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.580746 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.580812 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.580831 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.580853 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.580870 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.684160 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.684228 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.684254 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.684284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.684312 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.704108 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:45 crc kubenswrapper[4955]: E1128 06:22:45.704293 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.787758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.787816 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.787843 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.787873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.787898 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.891221 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.891269 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.891283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.891301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.891315 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.995646 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.995712 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.995734 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.995758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:45 crc kubenswrapper[4955]: I1128 06:22:45.995774 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:45Z","lastTransitionTime":"2025-11-28T06:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.098621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.098678 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.098689 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.098711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.098725 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.201965 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.202042 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.202061 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.202090 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.202115 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.304790 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.304891 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.304913 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.304941 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.304960 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.408334 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.408370 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.408381 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.408401 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.408413 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.512041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.512116 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.512129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.512148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.512166 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.615440 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.615523 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.615533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.615558 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.615571 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.703557 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.703612 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.703733 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:46 crc kubenswrapper[4955]: E1128 06:22:46.703835 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:46 crc kubenswrapper[4955]: E1128 06:22:46.704206 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:46 crc kubenswrapper[4955]: E1128 06:22:46.704385 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.719348 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.719404 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.719416 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.719437 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.719449 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.822849 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.822902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.822918 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.822942 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.822961 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.926324 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.926387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.926405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.926430 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:46 crc kubenswrapper[4955]: I1128 06:22:46.926447 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:46Z","lastTransitionTime":"2025-11-28T06:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.029217 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.029270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.029286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.029312 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.029329 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.132286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.132340 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.132363 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.132389 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.132407 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.234957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.235015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.235033 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.235056 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.235080 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.337909 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.337972 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.337991 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.338015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.338033 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.441616 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.441673 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.441689 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.441713 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.441730 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.544820 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.544885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.544909 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.544943 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.544969 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.648090 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.648145 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.648164 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.648187 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.648204 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.703642 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:47 crc kubenswrapper[4955]: E1128 06:22:47.703807 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.724430 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c53c974d-d870-4d7b-81e1-7655ec16e5ca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbe9d87f97fba7a2cd2cfc3d4ae39263996bf05074d82f805ab90c8d781eb9ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3590542e63e3ade61b7036c89e033662cd027ab9b2ccc69a894efb8aa7627ccb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7297ea494bb214dcd589a4cd67e8f3e331c1bd0d32808bb8eae77ee8e1b287b0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.744463 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.750762 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.750824 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.750844 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.750871 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.750891 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.762907 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://448932c3f20d58b754ae275464db20deb84e3d340f7c245d474069ca7342eb6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.785027 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dxhtm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"765bbe56-be77-4d81-824f-ad16924029f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:24Z\\\",\\\"message\\\":\\\"2025-11-28T06:21:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1\\\\n2025-11-28T06:21:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_941bbeb1-b733-4231-9ea6-cbb0012f71a1 to /host/opt/cni/bin/\\\\n2025-11-28T06:21:39Z [verbose] multus-daemon started\\\\n2025-11-28T06:21:39Z [verbose] Readiness Indicator file check\\\\n2025-11-28T06:22:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:22:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kl2w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dxhtm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.802572 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mhptq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"483773b2-23ab-4ebe-8111-f553a0c95523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmkx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mhptq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc 
kubenswrapper[4955]: I1128 06:22:47.819484 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e07e1d93-911c-4b1c-92c0-aa9bd3f6d5d7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ec51985d66d26af8cb2598cf3681efb226637cf70d72ea7091de369dac629fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://f9614be42e2b08663f23b609bf7a522553ddb510ba0f733232a3d4b3030068f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9614be42e2b08663f23b609bf7a522553ddb510ba0f733232a3d4b3030068f1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.841121 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9041a88be3b07b3d769e3a95e9d5dc8a0156b09444cc2e4e8d0df253091c7f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.853946 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.853997 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.854014 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.854050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.854071 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.862372 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a77ba9dc66d8008aea5f80c82631f676168e5854a2b40a08eab41733b043058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a272f2cff7121161eea671a6a83f90fbf8dca9f761b1ba000e204456360fbe6f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.882766 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.914549 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T06:22:37Z\\\",\\\"message\\\":\\\".io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 06:22:36.777771 6964 factory.go:656] Stopping watch factory\\\\nI1128 06:22:36.777831 6964 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 06:22:36.777888 6964 reflector.go:311] Stopping 
reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.778011 6964 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.778266 6964 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 06:22:36.791885 6964 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1128 06:22:36.791913 6964 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1128 06:22:36.791981 6964 ovnkube.go:599] Stopped ovnkube\\\\nI1128 06:22:36.792016 6964 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1128 06:22:36.792126 6964 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:22:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1512a90ce62391245a
155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lt4xc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-tj8bb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.933608 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"354aa5d3-82fc-4175-9c81-477508e4e1d8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9df98e3897aa15bce012f25046f579181e6da25ed7f79d3b157c410e1e49adb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://913d610ef5c76adc8243b4d6fd9438a58725ae7a21a575b0483f0c7de093b416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e20661ccb8a8c134c10a7f97ce042ee07a35ee3977bbe209ad19db3df7af07b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95c2d65b8be10c3b6032fcbc28bd346d6b580694b2f4da1bcc273435977a459c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.955169 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c415150e-85c8-4880-805e-0bb4a4219df6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T06:21:36Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 06:21:31.130535 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 06:21:31.131465 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1530256929/tls.crt::/tmp/serving-cert-1530256929/tls.key\\\\\\\"\\\\nI1128 06:21:36.569269 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 06:21:36.571624 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 06:21:36.571638 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 06:21:36.571655 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 06:21:36.571660 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 06:21:36.575839 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1128 06:21:36.575847 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1128 06:21:36.575878 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575886 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 06:21:36.575892 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 06:21:36.575896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 06:21:36.575900 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 06:21:36.575904 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1128 06:21:36.577677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:20Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d86f8e11f672884676790af11c660d
9c3925e5721cad1a53bb49dc2d88fddc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:17Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.957421 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.957466 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.957481 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.957531 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.957549 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:47Z","lastTransitionTime":"2025-11-28T06:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.971644 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vr4bd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85ba4360-d342-484a-a800-880080b2d0b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff27c7518c904dfbc45169fb6335b3796273ba70970074e6ad6456deb5208145\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49xk7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:37Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vr4bd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:47 crc kubenswrapper[4955]: I1128 06:22:47.989706 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad229ad8-9ea1-483d-a615-3f7d2ab408bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5c4fc904640d894bd126a2087542ef550d0e964a337752a2540c46700e1e4d11\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w5dxp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lmmht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:47Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.010192 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.034393 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n69rx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"308c3fbd-13df-4979-ac4a-ccd4319c48d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a999268087deec33be2f0f776aa9bf85d0315c458ac11eb71de45af834bc8d3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95a2f316315870060c672c4e4d3708525bcb6c444d438e935dcde4e71565268b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a38b90cd9005ff00fcce73956a70eff5aff4c1565cae3ef2edba9937cfa3a5b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10c3599b594dbed292e306c10d9eaff3d0c1a2b025ef185cda077164946f1110\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e9be
80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e9be80286db93e1273814632eb329352408b9d184bcc7c9ba56289c6aaa9df9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1596ba5cfbfb86c2f1832dc8d8625ffa8b6c8ecd09d37fec21974aa805ff95c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61be3084df530a7797c71bc9789af2580b5d1a21544bac3757d927e7a05615c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T06:21:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T06:21:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gpwbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n69rx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.054130 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qtmxm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6809f180-bdb9-4c8f-a2de-b90ac9535ed0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d0f1f4f5527b94b5382cf6fdb0c2cb54bcb14f1b2212fd3374012f4e0f5ee0da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-28T06:21:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmz6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qtmxm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.060369 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.060448 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.060471 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.060500 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.060549 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.072691 4955 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19a70e1d-140d-47b9-8ad9-3555be91ba0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T06:21:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e42fe0ae8ba9093786ed80b6d0be16dbc9962c19d5f57c005b98c4c4195c0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79772f782a31e1a9509e49e73f556db489e14da15c19fe13fad041b0549ab919\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T06:21:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bmjdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T06:21:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-rsrvx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:48Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:48 crc 
kubenswrapper[4955]: I1128 06:22:48.163862 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.163916 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.163933 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.163956 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.163974 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.266677 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.266754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.266778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.266809 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.266832 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.370644 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.370710 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.370728 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.370753 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.370771 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.475089 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.475152 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.475176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.475204 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.475226 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.577861 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.577934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.577951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.577979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.577999 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.680757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.680814 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.680833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.680858 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.680875 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.704033 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.704097 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.704124 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:48 crc kubenswrapper[4955]: E1128 06:22:48.704210 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:48 crc kubenswrapper[4955]: E1128 06:22:48.704334 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:48 crc kubenswrapper[4955]: E1128 06:22:48.704449 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.784068 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.784115 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.784155 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.784181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.784199 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.887271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.887351 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.887376 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.887408 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.887429 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.990433 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.990488 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.990560 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.990595 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:48 crc kubenswrapper[4955]: I1128 06:22:48.990616 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:48Z","lastTransitionTime":"2025-11-28T06:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.093465 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.093560 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.093578 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.093603 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.093624 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.197080 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.197140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.197156 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.197182 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.197200 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.300022 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.300116 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.300140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.300176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.300199 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.403352 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.403427 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.403451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.403479 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.403500 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.506441 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.506496 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.506552 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.506578 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.506597 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.608768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.608833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.608857 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.608885 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.608907 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.704340 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:49 crc kubenswrapper[4955]: E1128 06:22:49.704606 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.711293 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.711354 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.711379 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.711408 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.711429 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.814502 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.814601 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.814618 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.814642 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.814666 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.918050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.918091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.918107 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.918129 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:49 crc kubenswrapper[4955]: I1128 06:22:49.918146 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:49Z","lastTransitionTime":"2025-11-28T06:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.021733 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.021794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.021811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.021833 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.021854 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.124939 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.125015 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.125039 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.125067 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.125093 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.228001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.228063 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.228086 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.228116 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.228140 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.330754 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.330878 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.330897 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.330927 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.330943 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.433993 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.434060 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.434078 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.434102 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.434120 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.537119 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.537194 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.537222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.537253 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.537279 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.640337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.640393 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.640409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.640467 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.640487 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.704210 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:50 crc kubenswrapper[4955]: E1128 06:22:50.704426 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.704210 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.704256 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:50 crc kubenswrapper[4955]: E1128 06:22:50.704658 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:50 crc kubenswrapper[4955]: E1128 06:22:50.704745 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.742439 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.742502 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.742570 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.742596 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.742615 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.846539 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.846610 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.846629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.846654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.846678 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.949183 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.949259 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.949285 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.949315 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:50 crc kubenswrapper[4955]: I1128 06:22:50.949337 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:50Z","lastTransitionTime":"2025-11-28T06:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.051721 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.051770 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.051787 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.051808 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.051824 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.155486 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.155625 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.155649 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.155683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.155708 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.258623 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.258697 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.258721 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.258755 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.258778 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.362023 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.362091 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.362109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.362135 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.362152 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.465171 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.465246 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.465270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.465301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.465328 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.568357 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.568405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.568423 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.568445 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.568463 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.671672 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.671737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.671756 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.671782 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.671799 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.704403 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:51 crc kubenswrapper[4955]: E1128 06:22:51.704951 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.724058 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.774668 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.774737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.774758 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.774783 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.774802 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.877918 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.877985 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.878007 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.878035 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.878056 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.981388 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.981446 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.981465 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.981494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:51 crc kubenswrapper[4955]: I1128 06:22:51.981549 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:51Z","lastTransitionTime":"2025-11-28T06:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.084272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.084335 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.084357 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.084387 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.084411 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.187697 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.187760 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.187779 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.187804 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.187820 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.290764 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.290843 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.290867 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.290899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.290921 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.394568 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.394638 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.394650 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.394669 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.394681 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.498063 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.498109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.498126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.498148 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.498168 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.601242 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.601275 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.601286 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.601302 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.601314 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.703680 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.703762 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.703920 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.703960 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.704093 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.704217 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.704237 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.704310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.704329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.704356 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.704378 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.705382 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.705671 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.807806 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.807865 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.807882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.807910 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.807928 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.815743 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.815832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.815851 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.815870 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.815886 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.838322 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.844216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.844289 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.844310 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.844343 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.844365 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.869456 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.875902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.875956 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.875978 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.876006 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.876030 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.896744 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.903397 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.903465 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.903489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.903548 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.903573 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.925194 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.930346 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.930402 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.930460 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.930492 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.930543 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.951986 4955 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T06:22:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c8724b23-f7a1-4f7c-bb6a-5c302bc97241\\\",\\\"systemUUID\\\":\\\"3d14fd8f-8a80-4dfe-b670-badbf9b65f7b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T06:22:52Z is after 2025-08-24T17:21:41Z" Nov 28 06:22:52 crc kubenswrapper[4955]: E1128 06:22:52.952213 4955 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.954695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.954751 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.954768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.954794 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:52 crc kubenswrapper[4955]: I1128 06:22:52.954812 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:52Z","lastTransitionTime":"2025-11-28T06:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.058489 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.058599 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.058651 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.058677 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.058697 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.161474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.161657 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.161682 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.161711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.161732 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.264661 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.264765 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.264785 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.264813 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.264833 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.367778 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.367849 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.367873 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.367902 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.367928 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.471547 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.471606 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.471623 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.471651 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.471669 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.574651 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.574724 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.574745 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.574770 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.574789 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.678319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.678393 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.678409 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.678434 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.678451 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.703935 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:53 crc kubenswrapper[4955]: E1128 06:22:53.704156 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.781295 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.781384 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.781398 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.781424 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.781444 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.884396 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.884451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.884490 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.884528 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.884541 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.987636 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.987697 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.987714 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.987737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:53 crc kubenswrapper[4955]: I1128 06:22:53.987755 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:53Z","lastTransitionTime":"2025-11-28T06:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.091063 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.091131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.091150 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.091175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.091193 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.193915 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.193998 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.194022 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.194097 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.194122 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.297126 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.297201 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.297222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.297257 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.297281 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.400193 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.400268 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.400287 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.400311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.400329 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.503536 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.503581 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.503595 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.503629 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.503639 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.607736 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.607804 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.607825 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.607854 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.607876 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.703652 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.703777 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.703833 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:54 crc kubenswrapper[4955]: E1128 06:22:54.704213 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:54 crc kubenswrapper[4955]: E1128 06:22:54.704143 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:54 crc kubenswrapper[4955]: E1128 06:22:54.704417 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.711947 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.712039 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.712066 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.712105 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.712143 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.814897 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.814961 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.814980 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.815006 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.815023 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.917691 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.917766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.917788 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.917860 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:54 crc kubenswrapper[4955]: I1128 06:22:54.917888 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:54Z","lastTransitionTime":"2025-11-28T06:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.022876 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.023123 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.023149 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.023175 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.023194 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.126729 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.126797 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.126813 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.126838 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.126855 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.230239 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.230304 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.230323 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.230350 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.230375 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.333337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.333399 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.333416 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.333440 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.333459 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.436707 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.436784 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.436811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.436840 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.436864 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.539695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.539757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.539773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.539798 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.539817 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.642741 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.642795 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.642811 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.642835 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.642856 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.704040 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:55 crc kubenswrapper[4955]: E1128 06:22:55.704256 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.745429 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.745555 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.745621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.745654 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.745674 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.848682 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.848749 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.848766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.848792 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.848810 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.881399 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:55 crc kubenswrapper[4955]: E1128 06:22:55.881711 4955 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:22:55 crc kubenswrapper[4955]: E1128 06:22:55.881818 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs podName:483773b2-23ab-4ebe-8111-f553a0c95523 nodeName:}" failed. No retries permitted until 2025-11-28 06:23:59.881789609 +0000 UTC m=+162.471045219 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs") pod "network-metrics-daemon-mhptq" (UID: "483773b2-23ab-4ebe-8111-f553a0c95523") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.952176 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.952234 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.952251 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.952274 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:55 crc kubenswrapper[4955]: I1128 06:22:55.952291 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:55Z","lastTransitionTime":"2025-11-28T06:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.055957 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.056026 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.056050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.056080 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.056102 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.158647 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.158742 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.158771 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.158802 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.158824 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.262337 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.262414 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.262435 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.262464 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.262482 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.366311 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.366404 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.366422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.366448 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.366468 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.469655 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.469729 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.469996 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.470041 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.470068 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.573711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.573776 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.573795 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.573818 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.573837 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.677279 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.677405 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.677427 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.677451 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.677467 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.704127 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.704187 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.704376 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:56 crc kubenswrapper[4955]: E1128 06:22:56.704669 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:56 crc kubenswrapper[4955]: E1128 06:22:56.704755 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:56 crc kubenswrapper[4955]: E1128 06:22:56.704874 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.780552 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.780611 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.780627 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.780653 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.780672 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.883966 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.884092 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.884122 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.884153 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.884177 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.987321 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.987485 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.987565 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.987591 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:56 crc kubenswrapper[4955]: I1128 06:22:56.987609 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:56Z","lastTransitionTime":"2025-11-28T06:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.090594 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.090670 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.090690 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.090716 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.090733 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.194221 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.194284 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.194301 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.194326 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.194343 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.297246 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.297300 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.297316 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.297338 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.297356 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.399768 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.399832 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.399852 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.399876 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.399895 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.502338 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.502407 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.502425 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.502450 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.502469 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.605486 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.605620 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.605655 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.605691 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.605713 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.703813 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:57 crc kubenswrapper[4955]: E1128 06:22:57.704172 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.710009 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.710086 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.710110 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.710140 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.710166 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.765465 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=53.76543825 podStartE2EDuration="53.76543825s" podCreationTimestamp="2025-11-28 06:22:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.740060869 +0000 UTC m=+100.329316499" watchObservedRunningTime="2025-11-28 06:22:57.76543825 +0000 UTC m=+100.354693850" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.785752 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=81.785684512 podStartE2EDuration="1m21.785684512s" podCreationTimestamp="2025-11-28 06:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.765231245 +0000 UTC m=+100.354486885" watchObservedRunningTime="2025-11-28 06:22:57.785684512 +0000 UTC m=+100.374940122" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.786004 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vr4bd" podStartSLOduration=80.78599356 podStartE2EDuration="1m20.78599356s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.779847723 +0000 UTC m=+100.369103353" watchObservedRunningTime="2025-11-28 06:22:57.78599356 +0000 UTC m=+100.375249160" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.800085 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podStartSLOduration=80.800059723 
podStartE2EDuration="1m20.800059723s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.79738376 +0000 UTC m=+100.386639370" watchObservedRunningTime="2025-11-28 06:22:57.800059723 +0000 UTC m=+100.389315323" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.812626 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.812695 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.812718 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.812747 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.812773 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.868876 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-n69rx" podStartSLOduration=80.868847946 podStartE2EDuration="1m20.868847946s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.850672181 +0000 UTC m=+100.439927821" watchObservedRunningTime="2025-11-28 06:22:57.868847946 +0000 UTC m=+100.458103556" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.869471 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qtmxm" podStartSLOduration=80.869455713 podStartE2EDuration="1m20.869455713s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.867429248 +0000 UTC m=+100.456684818" watchObservedRunningTime="2025-11-28 06:22:57.869455713 +0000 UTC m=+100.458711323" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.907312 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=77.907286873 podStartE2EDuration="1m17.907286873s" podCreationTimestamp="2025-11-28 06:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.907153569 +0000 UTC m=+100.496409149" watchObservedRunningTime="2025-11-28 06:22:57.907286873 +0000 UTC m=+100.496542483" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.908068 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-rsrvx" 
podStartSLOduration=80.908056984 podStartE2EDuration="1m20.908056984s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.886562169 +0000 UTC m=+100.475817749" watchObservedRunningTime="2025-11-28 06:22:57.908056984 +0000 UTC m=+100.497312594" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.916103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.916157 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.916170 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.916187 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.916207 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:57Z","lastTransitionTime":"2025-11-28T06:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.960009 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dxhtm" podStartSLOduration=80.959982278 podStartE2EDuration="1m20.959982278s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.956576045 +0000 UTC m=+100.545831625" watchObservedRunningTime="2025-11-28 06:22:57.959982278 +0000 UTC m=+100.549237878" Nov 28 06:22:57 crc kubenswrapper[4955]: I1128 06:22:57.985297 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=20.985280817 podStartE2EDuration="20.985280817s" podCreationTimestamp="2025-11-28 06:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:57.985126292 +0000 UTC m=+100.574381862" watchObservedRunningTime="2025-11-28 06:22:57.985280817 +0000 UTC m=+100.574536387" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.018533 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.018567 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.018579 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.018594 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.018605 4955 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.020717 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=7.020703381 podStartE2EDuration="7.020703381s" podCreationTimestamp="2025-11-28 06:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:22:58.02030237 +0000 UTC m=+100.609557960" watchObservedRunningTime="2025-11-28 06:22:58.020703381 +0000 UTC m=+100.609958951" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.121416 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.121494 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.121551 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.121583 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.121612 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.224327 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.224362 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.224371 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.224385 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.224395 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.326693 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.326757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.326773 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.326797 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.326814 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.429601 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.429673 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.429699 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.429731 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.429750 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.536181 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.536231 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.536245 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.536263 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.536274 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.639619 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.639711 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.639737 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.639766 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.639803 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.703887 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.703967 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.703898 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:22:58 crc kubenswrapper[4955]: E1128 06:22:58.704097 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:22:58 crc kubenswrapper[4955]: E1128 06:22:58.704249 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:22:58 crc kubenswrapper[4955]: E1128 06:22:58.704360 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.742457 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.742549 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.742574 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.742602 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.742620 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.845552 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.845634 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.845665 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.845696 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.845717 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.948975 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.949053 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.949076 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.949106 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:58 crc kubenswrapper[4955]: I1128 06:22:58.949128 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:58Z","lastTransitionTime":"2025-11-28T06:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.052958 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.053020 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.053056 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.053085 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.053108 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.156777 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.156835 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.156853 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.156877 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.156898 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.259879 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.259934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.259951 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.259976 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.259993 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.363001 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.363057 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.363074 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.363096 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.363113 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.466007 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.466109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.466138 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.466168 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.466193 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.569283 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.569336 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.569352 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.569393 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.569410 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.672282 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.672345 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.672366 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.672394 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.672416 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.704217 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:22:59 crc kubenswrapper[4955]: E1128 06:22:59.704502 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.775408 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.775485 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.775531 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.775557 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.775577 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.878591 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.878637 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.878653 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.878677 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.878694 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.981733 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.981807 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.981826 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.981852 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:22:59 crc kubenswrapper[4955]: I1128 06:22:59.981870 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:22:59Z","lastTransitionTime":"2025-11-28T06:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.084270 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.084338 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.084361 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.084391 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.084452 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.187805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.187881 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.187904 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.187932 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.187954 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.291678 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.291757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.291782 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.291814 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.291834 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.394979 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.395076 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.395094 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.395120 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.395138 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.498287 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.498359 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.498382 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.498415 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.498443 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.601887 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.602045 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.602072 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.602103 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.602127 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.703426 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:00 crc kubenswrapper[4955]: E1128 06:23:00.703656 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.703723 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:00 crc kubenswrapper[4955]: E1128 06:23:00.703908 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.704009 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:00 crc kubenswrapper[4955]: E1128 06:23:00.704212 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.705791 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.705859 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.705882 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.705911 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.705929 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.808222 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.808319 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.808338 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.808368 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.808391 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.911866 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.911931 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.911947 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.911973 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:00 crc kubenswrapper[4955]: I1128 06:23:00.911990 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:00Z","lastTransitionTime":"2025-11-28T06:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.014934 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.015025 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.015050 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.015081 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.015105 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.117216 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.117272 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.117280 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.117292 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.117301 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.220075 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.220131 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.220163 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.220185 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.220200 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.323357 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.323431 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.323449 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.323474 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.323492 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.426653 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.426710 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.426728 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.426751 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.426767 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.530453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.530588 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.530621 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.530652 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.530673 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.633805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.633866 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.633889 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.633916 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.633936 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.704296 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:01 crc kubenswrapper[4955]: E1128 06:23:01.704591 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.736577 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.736703 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.736726 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.736756 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.736780 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.840669 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.840740 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.840775 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.840803 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.840829 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.944430 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.944485 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.944535 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.944560 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:01 crc kubenswrapper[4955]: I1128 06:23:01.944576 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:01Z","lastTransitionTime":"2025-11-28T06:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.047644 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.047703 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.047719 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.047742 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.047761 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.150891 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.150953 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.150977 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.151005 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.151026 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.254630 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.254682 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.254698 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.254725 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.254745 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.358329 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.358403 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.358422 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.358447 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.358465 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.461683 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.461786 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.461805 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.461828 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.461844 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.564358 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.564429 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.564453 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.564481 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.564502 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.667677 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.667741 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.667757 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.667781 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.667798 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.703998 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.704033 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:02 crc kubenswrapper[4955]: E1128 06:23:02.704196 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.704280 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:02 crc kubenswrapper[4955]: E1128 06:23:02.704483 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:02 crc kubenswrapper[4955]: E1128 06:23:02.704617 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.772003 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.772065 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.772083 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.772109 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.772128 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.874756 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.874899 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.874921 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.874946 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.874963 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.978130 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.978221 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.978243 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.978271 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:02 crc kubenswrapper[4955]: I1128 06:23:02.978291 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:02Z","lastTransitionTime":"2025-11-28T06:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.081114 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.081165 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.081184 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.081208 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.081228 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:03Z","lastTransitionTime":"2025-11-28T06:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.137579 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.137625 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.137643 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.137669 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.137687 4955 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T06:23:03Z","lastTransitionTime":"2025-11-28T06:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.210380 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl"] Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.210930 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.214585 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.214639 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.214683 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.214628 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.289647 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e923520-7c4b-486b-869c-7f13d1273f13-service-ca\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.289754 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8e923520-7c4b-486b-869c-7f13d1273f13-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.289904 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8e923520-7c4b-486b-869c-7f13d1273f13-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.290057 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8e923520-7c4b-486b-869c-7f13d1273f13-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.290132 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e923520-7c4b-486b-869c-7f13d1273f13-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.391758 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e923520-7c4b-486b-869c-7f13d1273f13-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.391843 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e923520-7c4b-486b-869c-7f13d1273f13-service-ca\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 
28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.391949 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8e923520-7c4b-486b-869c-7f13d1273f13-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.392018 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e923520-7c4b-486b-869c-7f13d1273f13-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.392088 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8e923520-7c4b-486b-869c-7f13d1273f13-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.392211 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8e923520-7c4b-486b-869c-7f13d1273f13-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.392291 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8e923520-7c4b-486b-869c-7f13d1273f13-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.393387 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e923520-7c4b-486b-869c-7f13d1273f13-service-ca\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.409610 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e923520-7c4b-486b-869c-7f13d1273f13-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.423454 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e923520-7c4b-486b-869c-7f13d1273f13-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-459kl\" (UID: \"8e923520-7c4b-486b-869c-7f13d1273f13\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.536377 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" Nov 28 06:23:03 crc kubenswrapper[4955]: W1128 06:23:03.562211 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e923520_7c4b_486b_869c_7f13d1273f13.slice/crio-330f9ceb16ddab2db2504d4e4d10f389453f658b5e54aa86dbe5c081afcaa9c6 WatchSource:0}: Error finding container 330f9ceb16ddab2db2504d4e4d10f389453f658b5e54aa86dbe5c081afcaa9c6: Status 404 returned error can't find the container with id 330f9ceb16ddab2db2504d4e4d10f389453f658b5e54aa86dbe5c081afcaa9c6 Nov 28 06:23:03 crc kubenswrapper[4955]: I1128 06:23:03.704134 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:03 crc kubenswrapper[4955]: E1128 06:23:03.704789 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:04 crc kubenswrapper[4955]: I1128 06:23:04.308449 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" event={"ID":"8e923520-7c4b-486b-869c-7f13d1273f13","Type":"ContainerStarted","Data":"07eabf3eac8ce893280eaec8aebae41ca6de104e63a54204cb31afa1bbd938ce"} Nov 28 06:23:04 crc kubenswrapper[4955]: I1128 06:23:04.308496 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" event={"ID":"8e923520-7c4b-486b-869c-7f13d1273f13","Type":"ContainerStarted","Data":"330f9ceb16ddab2db2504d4e4d10f389453f658b5e54aa86dbe5c081afcaa9c6"} Nov 28 06:23:04 crc kubenswrapper[4955]: I1128 06:23:04.337160 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-459kl" podStartSLOduration=87.337134026 podStartE2EDuration="1m27.337134026s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:04.335409789 +0000 UTC m=+106.924665449" watchObservedRunningTime="2025-11-28 06:23:04.337134026 +0000 UTC m=+106.926389626" Nov 28 06:23:04 crc kubenswrapper[4955]: I1128 06:23:04.703840 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:04 crc kubenswrapper[4955]: I1128 06:23:04.703941 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:04 crc kubenswrapper[4955]: I1128 06:23:04.703976 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:04 crc kubenswrapper[4955]: E1128 06:23:04.704085 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:04 crc kubenswrapper[4955]: E1128 06:23:04.704718 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:04 crc kubenswrapper[4955]: E1128 06:23:04.704837 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:05 crc kubenswrapper[4955]: I1128 06:23:05.703743 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:05 crc kubenswrapper[4955]: E1128 06:23:05.704132 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:06 crc kubenswrapper[4955]: I1128 06:23:06.703782 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:06 crc kubenswrapper[4955]: I1128 06:23:06.703806 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:06 crc kubenswrapper[4955]: I1128 06:23:06.703862 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:06 crc kubenswrapper[4955]: E1128 06:23:06.704347 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:06 crc kubenswrapper[4955]: E1128 06:23:06.704471 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:06 crc kubenswrapper[4955]: E1128 06:23:06.704609 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:06 crc kubenswrapper[4955]: I1128 06:23:06.704662 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:23:06 crc kubenswrapper[4955]: E1128 06:23:06.704837 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-tj8bb_openshift-ovn-kubernetes(9e192dfd-62ad-4870-b2fd-3c2a09006f6f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" Nov 28 06:23:07 crc kubenswrapper[4955]: I1128 06:23:07.704045 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:07 crc kubenswrapper[4955]: E1128 06:23:07.705183 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:08 crc kubenswrapper[4955]: I1128 06:23:08.704016 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:08 crc kubenswrapper[4955]: I1128 06:23:08.704030 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:08 crc kubenswrapper[4955]: E1128 06:23:08.704324 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:08 crc kubenswrapper[4955]: E1128 06:23:08.704406 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:08 crc kubenswrapper[4955]: I1128 06:23:08.704798 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:08 crc kubenswrapper[4955]: E1128 06:23:08.705008 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:09 crc kubenswrapper[4955]: I1128 06:23:09.704174 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:09 crc kubenswrapper[4955]: E1128 06:23:09.704399 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:10 crc kubenswrapper[4955]: I1128 06:23:10.704234 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:10 crc kubenswrapper[4955]: I1128 06:23:10.704325 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:10 crc kubenswrapper[4955]: E1128 06:23:10.704426 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:10 crc kubenswrapper[4955]: I1128 06:23:10.704354 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:10 crc kubenswrapper[4955]: E1128 06:23:10.704555 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:10 crc kubenswrapper[4955]: E1128 06:23:10.704754 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:11 crc kubenswrapper[4955]: I1128 06:23:11.339200 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/1.log" Nov 28 06:23:11 crc kubenswrapper[4955]: I1128 06:23:11.339961 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/0.log" Nov 28 06:23:11 crc kubenswrapper[4955]: I1128 06:23:11.340037 4955 generic.go:334] "Generic (PLEG): container finished" podID="765bbe56-be77-4d81-824f-ad16924029f4" containerID="a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3" exitCode=1 Nov 28 06:23:11 crc kubenswrapper[4955]: I1128 06:23:11.340078 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dxhtm" event={"ID":"765bbe56-be77-4d81-824f-ad16924029f4","Type":"ContainerDied","Data":"a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3"} Nov 28 06:23:11 crc kubenswrapper[4955]: I1128 06:23:11.340124 4955 scope.go:117] "RemoveContainer" containerID="96b9c34c2354a7e0ab3bf5c6b6056fc5ec4582dd902046de93512534ae8d98c5" Nov 28 06:23:11 crc kubenswrapper[4955]: I1128 06:23:11.340812 4955 scope.go:117] "RemoveContainer" containerID="a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3" Nov 28 06:23:11 crc kubenswrapper[4955]: E1128 06:23:11.341154 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dxhtm_openshift-multus(765bbe56-be77-4d81-824f-ad16924029f4)\"" pod="openshift-multus/multus-dxhtm" podUID="765bbe56-be77-4d81-824f-ad16924029f4" Nov 28 06:23:11 crc kubenswrapper[4955]: I1128 06:23:11.703407 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:11 crc kubenswrapper[4955]: E1128 06:23:11.703667 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:12 crc kubenswrapper[4955]: I1128 06:23:12.347133 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/1.log" Nov 28 06:23:12 crc kubenswrapper[4955]: I1128 06:23:12.703457 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:12 crc kubenswrapper[4955]: I1128 06:23:12.703468 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:12 crc kubenswrapper[4955]: I1128 06:23:12.703491 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:12 crc kubenswrapper[4955]: E1128 06:23:12.703692 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:12 crc kubenswrapper[4955]: E1128 06:23:12.703863 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:12 crc kubenswrapper[4955]: E1128 06:23:12.703954 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:13 crc kubenswrapper[4955]: I1128 06:23:13.704025 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:13 crc kubenswrapper[4955]: E1128 06:23:13.704606 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:14 crc kubenswrapper[4955]: I1128 06:23:14.703494 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:14 crc kubenswrapper[4955]: I1128 06:23:14.703615 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:14 crc kubenswrapper[4955]: I1128 06:23:14.703812 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:14 crc kubenswrapper[4955]: E1128 06:23:14.703877 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:14 crc kubenswrapper[4955]: E1128 06:23:14.703804 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:14 crc kubenswrapper[4955]: E1128 06:23:14.703636 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:15 crc kubenswrapper[4955]: I1128 06:23:15.704406 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:15 crc kubenswrapper[4955]: E1128 06:23:15.704635 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:16 crc kubenswrapper[4955]: I1128 06:23:16.703600 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:16 crc kubenswrapper[4955]: I1128 06:23:16.703656 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:16 crc kubenswrapper[4955]: I1128 06:23:16.703656 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:16 crc kubenswrapper[4955]: E1128 06:23:16.703779 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:16 crc kubenswrapper[4955]: E1128 06:23:16.704125 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:16 crc kubenswrapper[4955]: E1128 06:23:16.704342 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:17 crc kubenswrapper[4955]: I1128 06:23:17.703456 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:17 crc kubenswrapper[4955]: E1128 06:23:17.708964 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:17 crc kubenswrapper[4955]: E1128 06:23:17.740593 4955 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 28 06:23:17 crc kubenswrapper[4955]: E1128 06:23:17.800902 4955 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 06:23:18 crc kubenswrapper[4955]: I1128 06:23:18.704305 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:18 crc kubenswrapper[4955]: E1128 06:23:18.704489 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:18 crc kubenswrapper[4955]: I1128 06:23:18.704912 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:18 crc kubenswrapper[4955]: E1128 06:23:18.705047 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:18 crc kubenswrapper[4955]: I1128 06:23:18.705378 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:18 crc kubenswrapper[4955]: E1128 06:23:18.705605 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:19 crc kubenswrapper[4955]: I1128 06:23:19.704386 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:19 crc kubenswrapper[4955]: E1128 06:23:19.704636 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:20 crc kubenswrapper[4955]: I1128 06:23:20.703954 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:20 crc kubenswrapper[4955]: I1128 06:23:20.703990 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:20 crc kubenswrapper[4955]: E1128 06:23:20.704101 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:20 crc kubenswrapper[4955]: E1128 06:23:20.704352 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:20 crc kubenswrapper[4955]: I1128 06:23:20.703954 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:20 crc kubenswrapper[4955]: E1128 06:23:20.704817 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:21 crc kubenswrapper[4955]: I1128 06:23:21.703999 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:21 crc kubenswrapper[4955]: E1128 06:23:21.704249 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:21 crc kubenswrapper[4955]: I1128 06:23:21.705424 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.388276 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/3.log" Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.391711 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerStarted","Data":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.392281 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.676904 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podStartSLOduration=105.676877712 podStartE2EDuration="1m45.676877712s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:22.442540341 +0000 UTC m=+125.031795951" watchObservedRunningTime="2025-11-28 
06:23:22.676877712 +0000 UTC m=+125.266133322" Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.678189 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mhptq"] Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.678325 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:22 crc kubenswrapper[4955]: E1128 06:23:22.678459 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.704099 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.704093 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:22 crc kubenswrapper[4955]: I1128 06:23:22.704182 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:22 crc kubenswrapper[4955]: E1128 06:23:22.704226 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:22 crc kubenswrapper[4955]: E1128 06:23:22.704384 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:22 crc kubenswrapper[4955]: E1128 06:23:22.704434 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:22 crc kubenswrapper[4955]: E1128 06:23:22.801846 4955 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 28 06:23:23 crc kubenswrapper[4955]: I1128 06:23:23.703965 4955 scope.go:117] "RemoveContainer" containerID="a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3" Nov 28 06:23:24 crc kubenswrapper[4955]: I1128 06:23:24.404465 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/1.log" Nov 28 06:23:24 crc kubenswrapper[4955]: I1128 06:23:24.404951 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dxhtm" event={"ID":"765bbe56-be77-4d81-824f-ad16924029f4","Type":"ContainerStarted","Data":"7c6b876e6e1a692fae96efe82abb9434e1e29b377ae063e6a1a3abf80a90b3dd"} Nov 28 06:23:24 crc kubenswrapper[4955]: I1128 06:23:24.703365 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:24 crc kubenswrapper[4955]: E1128 06:23:24.703850 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:24 crc kubenswrapper[4955]: I1128 06:23:24.703467 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:24 crc kubenswrapper[4955]: I1128 06:23:24.703570 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:24 crc kubenswrapper[4955]: I1128 06:23:24.703445 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:24 crc kubenswrapper[4955]: E1128 06:23:24.704121 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:24 crc kubenswrapper[4955]: E1128 06:23:24.703958 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:24 crc kubenswrapper[4955]: E1128 06:23:24.704287 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:26 crc kubenswrapper[4955]: I1128 06:23:26.703638 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:26 crc kubenswrapper[4955]: I1128 06:23:26.703733 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:26 crc kubenswrapper[4955]: I1128 06:23:26.703733 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:26 crc kubenswrapper[4955]: E1128 06:23:26.703818 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 06:23:26 crc kubenswrapper[4955]: E1128 06:23:26.704007 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 06:23:26 crc kubenswrapper[4955]: I1128 06:23:26.704020 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:26 crc kubenswrapper[4955]: E1128 06:23:26.704209 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mhptq" podUID="483773b2-23ab-4ebe-8111-f553a0c95523" Nov 28 06:23:26 crc kubenswrapper[4955]: E1128 06:23:26.704365 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.704372 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.704438 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.704454 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.704444 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.707920 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.708385 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.708464 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.708389 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.710438 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 06:23:28 crc kubenswrapper[4955]: I1128 06:23:28.710528 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.262600 4955 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.305830 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v94cg"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.306219 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: W1128 06:23:35.310855 4955 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 28 06:23:35 crc kubenswrapper[4955]: E1128 06:23:35.310924 4955 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.310968 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.310991 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.311053 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 06:23:35 crc kubenswrapper[4955]: W1128 06:23:35.310957 4955 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 28 06:23:35 crc kubenswrapper[4955]: E1128 06:23:35.311156 4955 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.312673 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.320125 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4c6dx"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.320567 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mg445"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.321347 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.327116 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.343475 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.343550 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.343787 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.344047 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.344302 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.344578 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.345095 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.345096 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.345186 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.345305 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.345316 4955 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.345345 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.345628 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.345671 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.346000 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.346386 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.346386 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.346678 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.347878 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.349532 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.366537 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.366883 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.367254 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.367420 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.367592 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.367740 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.367958 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.368144 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.368895 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.369294 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.369446 4955 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-z7ncs"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.369725 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.369750 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.370098 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.370207 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.370494 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.370999 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.371068 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.371221 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.371230 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.371498 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.372556 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k8wp6"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.373163 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.378028 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.379310 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-8pl8k"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.379835 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-8pl8k" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.380298 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.381482 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.381695 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.381757 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.381901 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.382000 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.382024 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.382151 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.382201 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.382326 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 06:23:35 crc 
kubenswrapper[4955]: I1128 06:23:35.385700 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.386104 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.386588 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.386973 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.388903 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.389439 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.389639 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.389783 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.389931 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7vbtz"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.390371 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.391095 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.395716 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sxskz"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.396112 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qj5n6"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.396386 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.396780 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.397328 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.397672 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.399588 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4lj8g"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.400130 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.400785 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.401015 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.401133 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.401180 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.401347 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.401452 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.401577 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.401139 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.401757 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.402140 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.402600 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.403141 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.403770 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.403931 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.404086 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.404239 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.404293 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.404658 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.404919 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.405019 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v94cg"] Nov 28 06:23:35 crc 
kubenswrapper[4955]: I1128 06:23:35.405230 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.405447 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.415558 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.416499 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.416773 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.417080 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.417204 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.417362 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.417632 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qhfdt"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.417693 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.417711 4955 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.417887 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.417919 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418238 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418662 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-config\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418689 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418710 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 
28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418730 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418751 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvrw\" (UniqueName: \"kubernetes.io/projected/0d833f53-a5d1-47ea-ab5d-77bee61787fe-kube-api-access-xwvrw\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418768 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jh8\" (UniqueName: \"kubernetes.io/projected/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-kube-api-access-p9jh8\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418794 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418811 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418830 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d833f53-a5d1-47ea-ab5d-77bee61787fe-serving-cert\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418870 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-node-pullsecrets\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418950 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-client-ca\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418975 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37176422-d3bf-429f-af47-8dd4e135b40b-audit-dir\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.418994 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wbk2\" (UniqueName: \"kubernetes.io/projected/a7ea6110-fef8-49d3-9f79-8d6da21e8091-kube-api-access-5wbk2\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419014 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-config\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419031 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-etcd-client\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419060 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419075 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-audit-policies\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419093 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69319eb6-a378-4a28-a980-282c075c1c78-audit-dir\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419127 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419146 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5a1023e-2f70-4592-b507-8a198260ed35-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419161 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 
crc kubenswrapper[4955]: I1128 06:23:35.419180 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ea6110-fef8-49d3-9f79-8d6da21e8091-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419197 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-serving-cert\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419216 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-auth-proxy-config\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419377 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-machine-approver-tls\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419418 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-audit-dir\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419440 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tld\" (UniqueName: \"kubernetes.io/projected/0d9779f3-5d4d-4a2c-a1c6-159ae32c360d-kube-api-access-f4tld\") pod \"cluster-samples-operator-665b6dd947-j5ss6\" (UID: \"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419487 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419525 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419559 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-etcd-serving-ca\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " 
pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419579 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-encryption-config\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419597 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbvks\" (UniqueName: \"kubernetes.io/projected/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-kube-api-access-hbvks\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419654 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419682 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419702 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419719 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419742 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-config\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419775 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-encryption-config\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419794 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: 
\"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419812 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419873 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a7ea6110-fef8-49d3-9f79-8d6da21e8091-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419895 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-images\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419912 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlg5q\" (UniqueName: \"kubernetes.io/projected/37176422-d3bf-429f-af47-8dd4e135b40b-kube-api-access-hlg5q\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419943 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d9779f3-5d4d-4a2c-a1c6-159ae32c360d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j5ss6\" (UID: \"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419957 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-audit\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.419981 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmspc\" (UniqueName: \"kubernetes.io/projected/0a49aa7e-6973-4a7b-9b1d-71922376ee73-kube-api-access-jmspc\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420031 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-trusted-ca\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420057 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7x5\" (UniqueName: \"kubernetes.io/projected/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-kube-api-access-mg7x5\") pod \"machine-approver-56656f9798-c95pw\" (UID: 
\"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420085 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-serving-cert\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420106 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-config\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420127 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420158 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420205 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rzgn4\" (UniqueName: \"kubernetes.io/projected/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-kube-api-access-rzgn4\") pod \"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420225 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-etcd-client\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420247 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcgfm\" (UniqueName: \"kubernetes.io/projected/e5a1023e-2f70-4592-b507-8a198260ed35-kube-api-access-hcgfm\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420279 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/44a739c5-de17-458b-ab79-74c4bd74a43b-kube-api-access-b45bd\") pod \"downloads-7954f5f757-8pl8k\" (UID: \"44a739c5-de17-458b-ab79-74c4bd74a43b\") " pod="openshift-console/downloads-7954f5f757-8pl8k" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420299 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pvbp\" (UniqueName: \"kubernetes.io/projected/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-kube-api-access-4pvbp\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " 
pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420336 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-config\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420370 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420457 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-image-import-ca\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420542 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420564 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-config\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420580 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420601 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-audit-policies\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420657 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qt8g\" (UniqueName: \"kubernetes.io/projected/69319eb6-a378-4a28-a980-282c075c1c78-kube-api-access-6qt8g\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420701 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-config\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420736 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-serving-cert\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.420778 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.423100 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.423386 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.423609 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.424351 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.426314 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.431387 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.432163 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.432531 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.437571 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.440549 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.440474 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.443359 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.443897 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.443945 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.469268 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.470065 4955 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.470281 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.470425 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.470735 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.471051 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.471146 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.477396 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.478062 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.480555 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.482310 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.483609 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 
06:23:35.486391 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.487786 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.494587 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.496521 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.503924 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.504227 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.504929 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.505394 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.505787 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.508090 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-466vn"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.508901 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.509523 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.509626 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.510120 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.513262 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.517174 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mg445"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.517281 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pw7x8"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.535818 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.536677 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.537464 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.513662 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.538319 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.538663 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.538743 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.538937 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.539962 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540638 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/44a739c5-de17-458b-ab79-74c4bd74a43b-kube-api-access-b45bd\") pod \"downloads-7954f5f757-8pl8k\" (UID: \"44a739c5-de17-458b-ab79-74c4bd74a43b\") " pod="openshift-console/downloads-7954f5f757-8pl8k" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540674 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pvbp\" (UniqueName: \"kubernetes.io/projected/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-kube-api-access-4pvbp\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540720 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-config\") pod 
\"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540747 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540809 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca74d299-d21d-4169-adde-500339ec6876-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540869 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-image-import-ca\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540910 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-config\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540936 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540962 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-audit-policies\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.540988 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qt8g\" (UniqueName: \"kubernetes.io/projected/69319eb6-a378-4a28-a980-282c075c1c78-kube-api-access-6qt8g\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541016 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-config\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541045 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 
crc kubenswrapper[4955]: I1128 06:23:35.541075 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541103 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-serving-cert\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541126 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541156 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-config\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541187 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: 
\"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541215 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwvrw\" (UniqueName: \"kubernetes.io/projected/0d833f53-a5d1-47ea-ab5d-77bee61787fe-kube-api-access-xwvrw\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541239 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9jh8\" (UniqueName: \"kubernetes.io/projected/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-kube-api-access-p9jh8\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541267 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541304 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541336 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541368 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-node-pullsecrets\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541390 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-client-ca\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541418 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37176422-d3bf-429f-af47-8dd4e135b40b-audit-dir\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541447 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d833f53-a5d1-47ea-ab5d-77bee61787fe-serving-cert\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 
06:23:35.541479 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wbk2\" (UniqueName: \"kubernetes.io/projected/a7ea6110-fef8-49d3-9f79-8d6da21e8091-kube-api-access-5wbk2\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541501 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-config\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541555 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-etcd-client\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541583 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.541613 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1db2797b-53f3-4ccd-b212-9d5e3120820c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542027 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69319eb6-a378-4a28-a980-282c075c1c78-audit-dir\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542067 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542097 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-audit-policies\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542128 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5a1023e-2f70-4592-b507-8a198260ed35-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542159 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542200 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca74d299-d21d-4169-adde-500339ec6876-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542233 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ea6110-fef8-49d3-9f79-8d6da21e8091-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542263 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-serving-cert\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542293 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-auth-proxy-config\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542323 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-machine-approver-tls\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542357 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4tld\" (UniqueName: \"kubernetes.io/projected/0d9779f3-5d4d-4a2c-a1c6-159ae32c360d-kube-api-access-f4tld\") pod \"cluster-samples-operator-665b6dd947-j5ss6\" (UID: \"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542384 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-audit-dir\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542415 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542447 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542471 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-etcd-serving-ca\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542496 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-encryption-config\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542544 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbvks\" (UniqueName: \"kubernetes.io/projected/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-kube-api-access-hbvks\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542582 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44tc2\" (UniqueName: \"kubernetes.io/projected/1db2797b-53f3-4ccd-b212-9d5e3120820c-kube-api-access-44tc2\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542652 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26vzb\" (UniqueName: 
\"kubernetes.io/projected/c9a5dd12-fb17-4fab-b1f9-9a005cc2877a-kube-api-access-26vzb\") pod \"control-plane-machine-set-operator-78cbb6b69f-gn2ld\" (UID: \"c9a5dd12-fb17-4fab-b1f9-9a005cc2877a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542719 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542751 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542783 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542817 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-config\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542846 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542875 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-encryption-config\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542902 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.542905 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543102 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 crc 
kubenswrapper[4955]: I1128 06:23:35.543136 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a7ea6110-fef8-49d3-9f79-8d6da21e8091-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543159 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlg5q\" (UniqueName: \"kubernetes.io/projected/37176422-d3bf-429f-af47-8dd4e135b40b-kube-api-access-hlg5q\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543189 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d9779f3-5d4d-4a2c-a1c6-159ae32c360d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-j5ss6\" (UID: \"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543210 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-audit\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543228 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmspc\" (UniqueName: \"kubernetes.io/projected/0a49aa7e-6973-4a7b-9b1d-71922376ee73-kube-api-access-jmspc\") pod \"controller-manager-879f6c89f-v94cg\" (UID: 
\"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543250 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-images\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543276 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1db2797b-53f3-4ccd-b212-9d5e3120820c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543300 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7x5\" (UniqueName: \"kubernetes.io/projected/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-kube-api-access-mg7x5\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543323 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-serving-cert\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543339 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-trusted-ca\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543358 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-config\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543378 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543400 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543437 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1db2797b-53f3-4ccd-b212-9d5e3120820c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543470 
4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9a5dd12-fb17-4fab-b1f9-9a005cc2877a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gn2ld\" (UID: \"c9a5dd12-fb17-4fab-b1f9-9a005cc2877a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543524 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca74d299-d21d-4169-adde-500339ec6876-config\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543545 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-etcd-client\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543570 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcgfm\" (UniqueName: \"kubernetes.io/projected/e5a1023e-2f70-4592-b507-8a198260ed35-kube-api-access-hcgfm\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.543596 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzgn4\" (UniqueName: \"kubernetes.io/projected/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-kube-api-access-rzgn4\") 
pod \"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.544275 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.544674 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-auth-proxy-config\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.545309 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-client-ca\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.545367 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/37176422-d3bf-429f-af47-8dd4e135b40b-audit-dir\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.545378 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bf8xd"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.546046 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2"] Nov 28 06:23:35 crc 
kubenswrapper[4955]: I1128 06:23:35.546776 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.547107 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.547482 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.547739 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.554971 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-audit-dir\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.555261 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m27qh"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.558066 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-config\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.557065 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.558377 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-config\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.561992 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-audit\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.563664 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-etcd-serving-ca\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.564409 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-node-pullsecrets\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.565195 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.565315 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-images\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.565339 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.555784 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a7ea6110-fef8-49d3-9f79-8d6da21e8091-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.565850 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 
06:23:35.565952 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.566796 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.567230 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-machine-approver-tls\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.567339 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.567532 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d9779f3-5d4d-4a2c-a1c6-159ae32c360d-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-j5ss6\" (UID: \"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.567657 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hg4kn"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.567808 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-trusted-ca\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.568089 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-etcd-client\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.568101 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d833f53-a5d1-47ea-ab5d-77bee61787fe-serving-cert\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.568137 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 
crc kubenswrapper[4955]: I1128 06:23:35.568183 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-audit-policies\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.568383 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-serving-cert\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.568527 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.568737 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.569266 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.569990 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.570057 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69319eb6-a378-4a28-a980-282c075c1c78-audit-dir\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.570620 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-config\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.571689 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-service-ca-bundle\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.572610 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/37176422-d3bf-429f-af47-8dd4e135b40b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tq8t2\" 
(UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.572621 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-encryption-config\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.572907 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-config\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.572989 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-audit-policies\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.573128 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-config\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.573726 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.574084 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a7ea6110-fef8-49d3-9f79-8d6da21e8091-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.574094 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-image-import-ca\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.574223 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-serving-cert\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.574365 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.574378 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-etcd-client\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.574532 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.574777 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.577305 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-config\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.577451 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d833f53-a5d1-47ea-ab5d-77bee61787fe-config\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.577665 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.577684 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.577916 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-encryption-config\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.578198 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37176422-d3bf-429f-af47-8dd4e135b40b-serving-cert\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.578617 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.578692 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4c6dx"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.578709 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.578748 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.578761 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.579665 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.579878 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.581434 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-vlq67"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.582924 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.583364 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kmfw2"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.583774 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.583871 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kmfw2" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.584666 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5a1023e-2f70-4592-b507-8a198260ed35-serving-cert\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.585288 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.586552 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-z7ncs"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.589308 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8pl8k"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.594555 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k8wp6"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.595630 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.597162 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4lj8g"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.599103 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.599583 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.601253 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7vbtz"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.607534 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qj5n6"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.609053 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.611065 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-466vn"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.611653 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.612282 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.616293 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sxskz"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.617693 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.619079 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qhfdt"] Nov 
28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.620229 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bd6qn"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.621394 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.622081 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.624108 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bf8xd"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.625207 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.626405 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.627776 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vlq67"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.628017 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.629724 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.630812 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.632060 4955 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.633256 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.634395 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.635447 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.636498 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kmfw2"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.637568 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.638920 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.640270 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.641329 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m27qh"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.642436 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.643559 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-bd6qn"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.644636 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hg4kn"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.644897 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1db2797b-53f3-4ccd-b212-9d5e3120820c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.644929 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca74d299-d21d-4169-adde-500339ec6876-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.644965 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44tc2\" (UniqueName: \"kubernetes.io/projected/1db2797b-53f3-4ccd-b212-9d5e3120820c-kube-api-access-44tc2\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.644984 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26vzb\" (UniqueName: \"kubernetes.io/projected/c9a5dd12-fb17-4fab-b1f9-9a005cc2877a-kube-api-access-26vzb\") pod \"control-plane-machine-set-operator-78cbb6b69f-gn2ld\" (UID: \"c9a5dd12-fb17-4fab-b1f9-9a005cc2877a\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.645019 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1db2797b-53f3-4ccd-b212-9d5e3120820c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.645044 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1db2797b-53f3-4ccd-b212-9d5e3120820c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.645062 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c9a5dd12-fb17-4fab-b1f9-9a005cc2877a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gn2ld\" (UID: \"c9a5dd12-fb17-4fab-b1f9-9a005cc2877a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.645156 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca74d299-d21d-4169-adde-500339ec6876-config\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.645223 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca74d299-d21d-4169-adde-500339ec6876-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.645647 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-t8jwg"] Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.646265 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.646428 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1db2797b-53f3-4ccd-b212-9d5e3120820c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.647809 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.648239 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1db2797b-53f3-4ccd-b212-9d5e3120820c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.648737 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c9a5dd12-fb17-4fab-b1f9-9a005cc2877a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gn2ld\" (UID: \"c9a5dd12-fb17-4fab-b1f9-9a005cc2877a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.668007 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.688022 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.700312 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca74d299-d21d-4169-adde-500339ec6876-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.710093 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.717648 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca74d299-d21d-4169-adde-500339ec6876-config\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.727772 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.748272 4955 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.768477 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.789349 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.809296 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.828484 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.848389 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.868853 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.889567 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.908874 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.929305 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.949249 4955 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 06:23:35 crc kubenswrapper[4955]: I1128 06:23:35.988414 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.009896 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.029157 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.049162 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.069403 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.088592 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.108969 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.129364 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.149575 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 06:23:36 crc 
kubenswrapper[4955]: I1128 06:23:36.168829 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.188663 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.209031 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.229037 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.248277 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.269618 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.289423 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.309323 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.328994 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.348393 4955 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.369559 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.389417 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.408688 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.428708 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.449832 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.471856 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.489564 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.509312 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.528403 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.547594 4955 request.go:700] Waited for 1.008294074s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.550972 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.568317 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: E1128 06:23:36.575613 4955 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 28 06:23:36 crc kubenswrapper[4955]: E1128 06:23:36.575988 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert podName:0a49aa7e-6973-4a7b-9b1d-71922376ee73 nodeName:}" failed. No retries permitted until 2025-11-28 06:23:37.075945963 +0000 UTC m=+139.665201573 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert") pod "controller-manager-879f6c89f-v94cg" (UID: "0a49aa7e-6973-4a7b-9b1d-71922376ee73") : failed to sync secret cache: timed out waiting for the condition Nov 28 06:23:36 crc kubenswrapper[4955]: E1128 06:23:36.577906 4955 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Nov 28 06:23:36 crc kubenswrapper[4955]: E1128 06:23:36.578015 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca podName:0a49aa7e-6973-4a7b-9b1d-71922376ee73 nodeName:}" failed. No retries permitted until 2025-11-28 06:23:37.07798965 +0000 UTC m=+139.667245250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca") pod "controller-manager-879f6c89f-v94cg" (UID: "0a49aa7e-6973-4a7b-9b1d-71922376ee73") : failed to sync configmap cache: timed out waiting for the condition Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.621009 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzgn4\" (UniqueName: \"kubernetes.io/projected/53dcbcb4-95a6-451e-b630-e2e067c6cd3d-kube-api-access-rzgn4\") pod \"openshift-apiserver-operator-796bbdcf4f-qzvzd\" (UID: \"53dcbcb4-95a6-451e-b630-e2e067c6cd3d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.629687 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.636468 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbvks\" (UniqueName: \"kubernetes.io/projected/d6fd184f-b649-4fb6-a1d6-24b158d3f9df-kube-api-access-hbvks\") pod \"console-operator-58897d9998-7vbtz\" (UID: \"d6fd184f-b649-4fb6-a1d6-24b158d3f9df\") " pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.648952 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.669715 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.689499 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 
06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.709777 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.729167 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.749149 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.769136 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.789674 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.809176 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.813974 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.823373 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.828477 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.880416 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.888906 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.890546 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.942161 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4tld\" (UniqueName: \"kubernetes.io/projected/0d9779f3-5d4d-4a2c-a1c6-159ae32c360d-kube-api-access-f4tld\") pod \"cluster-samples-operator-665b6dd947-j5ss6\" (UID: \"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.957180 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlg5q\" (UniqueName: \"kubernetes.io/projected/37176422-d3bf-429f-af47-8dd4e135b40b-kube-api-access-hlg5q\") pod \"apiserver-7bbb656c7d-tq8t2\" (UID: \"37176422-d3bf-429f-af47-8dd4e135b40b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:36 crc kubenswrapper[4955]: I1128 06:23:36.962953 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmspc\" (UniqueName: \"kubernetes.io/projected/0a49aa7e-6973-4a7b-9b1d-71922376ee73-kube-api-access-jmspc\") pod 
\"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.049918 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wbk2\" (UniqueName: \"kubernetes.io/projected/a7ea6110-fef8-49d3-9f79-8d6da21e8091-kube-api-access-5wbk2\") pod \"openshift-config-operator-7777fb866f-dh4bx\" (UID: \"a7ea6110-fef8-49d3-9f79-8d6da21e8091\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.067165 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.074460 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7x5\" (UniqueName: \"kubernetes.io/projected/ad19b4ea-ecc3-45ba-a946-171e6f2daa38-kube-api-access-mg7x5\") pod \"machine-approver-56656f9798-c95pw\" (UID: \"ad19b4ea-ecc3-45ba-a946-171e6f2daa38\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.082478 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcgfm\" (UniqueName: \"kubernetes.io/projected/e5a1023e-2f70-4592-b507-8a198260ed35-kube-api-access-hcgfm\") pod \"route-controller-manager-6576b87f9c-v6cpm\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.087764 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b45bd\" (UniqueName: \"kubernetes.io/projected/44a739c5-de17-458b-ab79-74c4bd74a43b-kube-api-access-b45bd\") pod \"downloads-7954f5f757-8pl8k\" (UID: 
\"44a739c5-de17-458b-ab79-74c4bd74a43b\") " pod="openshift-console/downloads-7954f5f757-8pl8k" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.090861 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwvrw\" (UniqueName: \"kubernetes.io/projected/0d833f53-a5d1-47ea-ab5d-77bee61787fe-kube-api-access-xwvrw\") pod \"authentication-operator-69f744f599-k8wp6\" (UID: \"0d833f53-a5d1-47ea-ab5d-77bee61787fe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.096278 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9jh8\" (UniqueName: \"kubernetes.io/projected/4b20c134-37f2-42c2-be5f-d6f4a86d7b10-kube-api-access-p9jh8\") pod \"machine-api-operator-5694c8668f-4c6dx\" (UID: \"4b20c134-37f2-42c2-be5f-d6f4a86d7b10\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.103312 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qt8g\" (UniqueName: \"kubernetes.io/projected/69319eb6-a378-4a28-a980-282c075c1c78-kube-api-access-6qt8g\") pod \"oauth-openshift-558db77b4-z7ncs\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.122798 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pvbp\" (UniqueName: \"kubernetes.io/projected/5b51a0dc-e121-4ba8-b0be-b01cf8553bfb-kube-api-access-4pvbp\") pod \"apiserver-76f77b778f-mg445\" (UID: \"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb\") " pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.128465 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 06:23:37 crc kubenswrapper[4955]: 
I1128 06:23:37.148400 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.151981 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.168454 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.168668 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.169018 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.174993 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.188706 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.190141 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.198231 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.208923 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.216680 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.224310 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.228967 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.249100 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.269111 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.272783 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.278652 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.289143 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.305805 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-8pl8k" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.308040 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.329321 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.348891 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.368195 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.389099 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.412196 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.435052 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.450220 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.470683 4955 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.480210 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" event={"ID":"ad19b4ea-ecc3-45ba-a946-171e6f2daa38","Type":"ContainerStarted","Data":"9db25bc7fe0e1fd7d8281ad524e8fd08c39612902237debcf8253bbb1bba6b7d"} Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.489031 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.532443 4955 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.550089 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.566939 4955 request.go:700] Waited for 1.945271774s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.568247 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.603588 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44tc2\" (UniqueName: \"kubernetes.io/projected/1db2797b-53f3-4ccd-b212-9d5e3120820c-kube-api-access-44tc2\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.622142 
4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca74d299-d21d-4169-adde-500339ec6876-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vjlfm\" (UID: \"ca74d299-d21d-4169-adde-500339ec6876\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.643593 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1db2797b-53f3-4ccd-b212-9d5e3120820c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vbqfw\" (UID: \"1db2797b-53f3-4ccd-b212-9d5e3120820c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.662144 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26vzb\" (UniqueName: \"kubernetes.io/projected/c9a5dd12-fb17-4fab-b1f9-9a005cc2877a-kube-api-access-26vzb\") pod \"control-plane-machine-set-operator-78cbb6b69f-gn2ld\" (UID: \"c9a5dd12-fb17-4fab-b1f9-9a005cc2877a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.675005 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.689227 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.711731 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.749814 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.756705 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.762209 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.764717 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.768904 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.770886 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca\") pod \"controller-manager-879f6c89f-v94cg\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.770926 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.789409 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-trusted-ca\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.789445 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4e8a345-0a04-4ad6-b7c7-7805823c4026-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.789479 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: \"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.789535 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-bound-sa-token\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.789567 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1-metrics-tls\") pod \"dns-operator-744455d44c-qj5n6\" (UID: \"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.789638 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-oauth-serving-cert\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790141 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67k2l\" (UniqueName: \"kubernetes.io/projected/74bc1065-1326-4674-8a9d-02b7b7fce98b-kube-api-access-67k2l\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790225 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2922\" (UniqueName: \"kubernetes.io/projected/e4e8a345-0a04-4ad6-b7c7-7805823c4026-kube-api-access-q2922\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790262 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74bc1065-1326-4674-8a9d-02b7b7fce98b-serving-cert\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790286 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-config\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790381 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-registry-tls\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790405 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89f41960-5178-4dcf-adaa-823b323397d5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790426 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e4e8a345-0a04-4ad6-b7c7-7805823c4026-metrics-tls\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790454 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790495 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9sn\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-kube-api-access-mr9sn\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790541 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-trusted-ca-bundle\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790562 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-ca\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790586 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-serving-cert\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790600 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-service-ca\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790638 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89f41960-5178-4dcf-adaa-823b323397d5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790660 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kwd\" (UniqueName: \"kubernetes.io/projected/0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1-kube-api-access-z2kwd\") pod \"dns-operator-744455d44c-qj5n6\" (UID: \"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790690 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8s6x\" (UniqueName: \"kubernetes.io/projected/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-kube-api-access-b8s6x\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790714 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-registry-certificates\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: 
\"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790728 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-config\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790745 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: \"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790796 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: \"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790819 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e8a345-0a04-4ad6-b7c7-7805823c4026-trusted-ca\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790834 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-client\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790868 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-oauth-config\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.790884 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-service-ca\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: E1128 06:23:37.791092 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:38.29107849 +0000 UTC m=+140.880334060 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.837599 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2"] Nov 28 06:23:37 crc kubenswrapper[4955]: W1128 06:23:37.847153 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37176422_d3bf_429f_af47_8dd4e135b40b.slice/crio-34fd1f2a5b83baaedbb865d074bf8b8cecc38be51f80bb96998f8cb3d4918018 WatchSource:0}: Error finding container 34fd1f2a5b83baaedbb865d074bf8b8cecc38be51f80bb96998f8cb3d4918018: Status 404 returned error can't find the container with id 34fd1f2a5b83baaedbb865d074bf8b8cecc38be51f80bb96998f8cb3d4918018 Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.891930 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892116 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtkpn\" (UniqueName: \"kubernetes.io/projected/ce7cab06-019f-4d6b-82be-e9cacdbddb06-kube-api-access-rtkpn\") pod \"ingress-canary-kmfw2\" (UID: \"ce7cab06-019f-4d6b-82be-e9cacdbddb06\") " pod="openshift-ingress-canary/ingress-canary-kmfw2" Nov 28 
06:23:37 crc kubenswrapper[4955]: E1128 06:23:37.892154 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:38.392130296 +0000 UTC m=+140.981385866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892265 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2922\" (UniqueName: \"kubernetes.io/projected/e4e8a345-0a04-4ad6-b7c7-7805823c4026-kube-api-access-q2922\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892298 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-csi-data-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892347 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-registry-tls\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: 
\"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892364 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89f41960-5178-4dcf-adaa-823b323397d5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892378 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e4e8a345-0a04-4ad6-b7c7-7805823c4026-metrics-tls\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892425 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892442 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10eb260e-b06b-4c05-bd29-5cae90517573-tmpfs\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892469 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54ngx\" 
(UniqueName: \"kubernetes.io/projected/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-kube-api-access-54ngx\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892519 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-ca\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892539 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-serving-cert\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892554 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-service-ca\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892581 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-metrics-tls\") pod \"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892601 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-node-bootstrap-token\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892616 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7kv4\" (UniqueName: \"kubernetes.io/projected/a148b60f-cd30-40a2-938d-133d328901a3-kube-api-access-q7kv4\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892631 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a8afa31-175a-4149-aa75-6b68fba36433-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m27qh\" (UID: \"7a8afa31-175a-4149-aa75-6b68fba36433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892645 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d633e7c-2b49-443a-9f5c-f5b8d475e399-srv-cert\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892738 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkn6d\" (UniqueName: \"kubernetes.io/projected/aef4e7b4-fffd-4637-9ce9-22264299ad8b-kube-api-access-hkn6d\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: 
\"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892760 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-config\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892787 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-client\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892807 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: \"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892823 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aef4e7b4-fffd-4637-9ce9-22264299ad8b-proxy-tls\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.892844 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpnrn\" (UniqueName: 
\"kubernetes.io/projected/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-kube-api-access-zpnrn\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.893837 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-ca\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.895016 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:37 crc kubenswrapper[4955]: E1128 06:23:37.895043 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:38.395026317 +0000 UTC m=+140.984281887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.895073 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-oauth-config\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.895118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-trusted-ca\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.895143 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-default-certificate\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.896146 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: 
\"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.896811 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gj6\" (UniqueName: \"kubernetes.io/projected/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-kube-api-access-l2gj6\") pod \"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.896853 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-certs\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.896870 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae90aa07-e0e4-47ea-8297-449220260a93-secret-volume\") pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.896889 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf959\" (UniqueName: \"kubernetes.io/projected/10eb260e-b06b-4c05-bd29-5cae90517573-kube-api-access-vf959\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.896907 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e298e78-6a12-4148-aba0-25829ecf409c-service-ca-bundle\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.896964 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-config\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.896990 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-service-ca\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897366 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-socket-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897395 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: \"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897413 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-config-volume\") pod \"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897432 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w57bs\" (UniqueName: \"kubernetes.io/projected/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-kube-api-access-w57bs\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897447 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-srv-cert\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897462 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae62707c-beac-4d1f-96e0-fb7d9094a58f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897488 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aef4e7b4-fffd-4637-9ce9-22264299ad8b-images\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: 
\"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897519 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmqk4\" (UniqueName: \"kubernetes.io/projected/ae62707c-beac-4d1f-96e0-fb7d9094a58f-kube-api-access-bmqk4\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897538 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7cab06-019f-4d6b-82be-e9cacdbddb06-cert\") pod \"ingress-canary-kmfw2\" (UID: \"ce7cab06-019f-4d6b-82be-e9cacdbddb06\") " pod="openshift-ingress-canary/ingress-canary-kmfw2" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897585 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8twz7\" (UniqueName: \"kubernetes.io/projected/42eecb9c-abf9-4306-92ee-6cbb96e76068-kube-api-access-8twz7\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897602 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d633e7c-2b49-443a-9f5c-f5b8d475e399-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897617 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjrlf\" (UniqueName: \"kubernetes.io/projected/15e25bb5-0316-4fe0-87de-de00d7c74741-kube-api-access-tjrlf\") pod \"package-server-manager-789f6589d5-r6gpz\" (UID: \"15e25bb5-0316-4fe0-87de-de00d7c74741\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897631 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897668 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67k2l\" (UniqueName: \"kubernetes.io/projected/74bc1065-1326-4674-8a9d-02b7b7fce98b-kube-api-access-67k2l\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897696 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74bc1065-1326-4674-8a9d-02b7b7fce98b-serving-cert\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897711 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-config\") pod \"etcd-operator-b45778765-qhfdt\" (UID: 
\"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897714 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89f41960-5178-4dcf-adaa-823b323397d5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897739 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmwm8\" (UniqueName: \"kubernetes.io/projected/7a8afa31-175a-4149-aa75-6b68fba36433-kube-api-access-dmwm8\") pod \"multus-admission-controller-857f4d67dd-m27qh\" (UID: \"7a8afa31-175a-4149-aa75-6b68fba36433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897756 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.897781 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-registration-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.898540 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-stats-auth\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.898571 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-metrics-certs\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.898627 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aef4e7b4-fffd-4637-9ce9-22264299ad8b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.898983 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74bc1065-1326-4674-8a9d-02b7b7fce98b-etcd-client\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.899123 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-trusted-ca\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.899173 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9sn\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-kube-api-access-mr9sn\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.899195 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-mountpoint-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.899285 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz242\" (UniqueName: \"kubernetes.io/projected/8e298e78-6a12-4148-aba0-25829ecf409c-kube-api-access-qz242\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.899322 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-trusted-ca-bundle\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.899350 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a148b60f-cd30-40a2-938d-133d328901a3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.900184 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-trusted-ca-bundle\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.900198 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/15e25bb5-0316-4fe0-87de-de00d7c74741-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r6gpz\" (UID: \"15e25bb5-0316-4fe0-87de-de00d7c74741\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.900226 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7jrx\" (UniqueName: \"kubernetes.io/projected/41149789-c52d-44af-9616-92969e6d37c2-kube-api-access-f7jrx\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.900277 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89f41960-5178-4dcf-adaa-823b323397d5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.900325 4955 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-z2kwd\" (UniqueName: \"kubernetes.io/projected/0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1-kube-api-access-z2kwd\") pod \"dns-operator-744455d44c-qj5n6\" (UID: \"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.900680 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8s6x\" (UniqueName: \"kubernetes.io/projected/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-kube-api-access-b8s6x\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.900956 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-signing-cabundle\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.901196 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-registry-certificates\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.901317 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: \"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc 
kubenswrapper[4955]: I1128 06:23:37.901349 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd42l\" (UniqueName: \"kubernetes.io/projected/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-kube-api-access-dd42l\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.901649 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e8a345-0a04-4ad6-b7c7-7805823c4026-trusted-ca\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.901718 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcw9\" (UniqueName: \"kubernetes.io/projected/ae90aa07-e0e4-47ea-8297-449220260a93-kube-api-access-kwcw9\") pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.901918 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74bc1065-1326-4674-8a9d-02b7b7fce98b-serving-cert\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.901967 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df2fb484-2a0d-4283-bdb2-4b7915541845-proxy-tls\") pod \"machine-config-controller-84d6567774-466vn\" 
(UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.901986 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllpn\" (UniqueName: \"kubernetes.io/projected/df2fb484-2a0d-4283-bdb2-4b7915541845-kube-api-access-jllpn\") pod \"machine-config-controller-84d6567774-466vn\" (UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.902023 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-plugins-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.902870 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41149789-c52d-44af-9616-92969e6d37c2-config\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.902919 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-profile-collector-cert\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.902954 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-service-ca\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.903523 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-registry-certificates\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.903803 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-service-ca\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.903880 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae62707c-beac-4d1f-96e0-fb7d9094a58f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.903909 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-signing-key\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:37 crc 
kubenswrapper[4955]: I1128 06:23:37.903929 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4e8a345-0a04-4ad6-b7c7-7805823c4026-bound-sa-token\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.903965 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89f41960-5178-4dcf-adaa-823b323397d5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904013 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e4e8a345-0a04-4ad6-b7c7-7805823c4026-trusted-ca\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904139 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10eb260e-b06b-4c05-bd29-5cae90517573-webhook-cert\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904201 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10eb260e-b06b-4c05-bd29-5cae90517573-apiservice-cert\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: 
\"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904271 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae90aa07-e0e4-47ea-8297-449220260a93-config-volume\") pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904306 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41149789-c52d-44af-9616-92969e6d37c2-serving-cert\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904540 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df2fb484-2a0d-4283-bdb2-4b7915541845-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-466vn\" (UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904585 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904779 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-bound-sa-token\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904812 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts8qg\" (UniqueName: \"kubernetes.io/projected/3d633e7c-2b49-443a-9f5c-f5b8d475e399-kube-api-access-ts8qg\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904834 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzbjt\" (UniqueName: \"kubernetes.io/projected/fa934dd3-d47c-454c-80fb-f40124c61e2d-kube-api-access-vzbjt\") pod \"migrator-59844c95c7-j4mts\" (UID: \"fa934dd3-d47c-454c-80fb-f40124c61e2d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904862 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-config\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.904922 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1-metrics-tls\") pod \"dns-operator-744455d44c-qj5n6\" (UID: 
\"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.905025 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-oauth-serving-cert\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.905265 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a148b60f-cd30-40a2-938d-133d328901a3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.905296 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-registry-tls\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.905674 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-oauth-serving-cert\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.906206 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: \"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.906234 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-serving-cert\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.906424 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74bc1065-1326-4674-8a9d-02b7b7fce98b-config\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.907942 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e4e8a345-0a04-4ad6-b7c7-7805823c4026-metrics-tls\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.908881 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-oauth-config\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.909299 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1-metrics-tls\") pod \"dns-operator-744455d44c-qj5n6\" (UID: \"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.924366 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2922\" (UniqueName: \"kubernetes.io/projected/e4e8a345-0a04-4ad6-b7c7-7805823c4026-kube-api-access-q2922\") pod \"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.945678 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7vbtz"] Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.946042 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd"] Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.954405 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mg445"] Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.962324 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7008cba8-7ab6-4b6d-9340-c7ac6157d59a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-bqk5b\" (UID: \"7008cba8-7ab6-4b6d-9340-c7ac6157d59a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.963117 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4c6dx"] Nov 28 06:23:37 crc kubenswrapper[4955]: W1128 06:23:37.970168 4955 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53dcbcb4_95a6_451e_b630_e2e067c6cd3d.slice/crio-6318e190cc3bd8b14f1ca29e0e4bb919883855f788629d23480a3b1bd3f1d753 WatchSource:0}: Error finding container 6318e190cc3bd8b14f1ca29e0e4bb919883855f788629d23480a3b1bd3f1d753: Status 404 returned error can't find the container with id 6318e190cc3bd8b14f1ca29e0e4bb919883855f788629d23480a3b1bd3f1d753 Nov 28 06:23:37 crc kubenswrapper[4955]: I1128 06:23:37.983780 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67k2l\" (UniqueName: \"kubernetes.io/projected/74bc1065-1326-4674-8a9d-02b7b7fce98b-kube-api-access-67k2l\") pod \"etcd-operator-b45778765-qhfdt\" (UID: \"74bc1065-1326-4674-8a9d-02b7b7fce98b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.002572 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9sn\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-kube-api-access-mr9sn\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006109 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.006252 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:23:38.506218146 +0000 UTC m=+141.095473716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006279 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-default-certificate\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006302 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2gj6\" (UniqueName: \"kubernetes.io/projected/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-kube-api-access-l2gj6\") pod \"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006319 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-certs\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006335 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae90aa07-e0e4-47ea-8297-449220260a93-secret-volume\") 
pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006352 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf959\" (UniqueName: \"kubernetes.io/projected/10eb260e-b06b-4c05-bd29-5cae90517573-kube-api-access-vf959\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006367 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e298e78-6a12-4148-aba0-25829ecf409c-service-ca-bundle\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006382 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-socket-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006397 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-config-volume\") pod \"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006415 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w57bs\" (UniqueName: 
\"kubernetes.io/projected/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-kube-api-access-w57bs\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006429 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-srv-cert\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006446 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae62707c-beac-4d1f-96e0-fb7d9094a58f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006464 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aef4e7b4-fffd-4637-9ce9-22264299ad8b-images\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006479 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmqk4\" (UniqueName: \"kubernetes.io/projected/ae62707c-beac-4d1f-96e0-fb7d9094a58f-kube-api-access-bmqk4\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006494 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ce7cab06-019f-4d6b-82be-e9cacdbddb06-cert\") pod \"ingress-canary-kmfw2\" (UID: \"ce7cab06-019f-4d6b-82be-e9cacdbddb06\") " pod="openshift-ingress-canary/ingress-canary-kmfw2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006521 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8twz7\" (UniqueName: \"kubernetes.io/projected/42eecb9c-abf9-4306-92ee-6cbb96e76068-kube-api-access-8twz7\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006540 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d633e7c-2b49-443a-9f5c-f5b8d475e399-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006556 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjrlf\" (UniqueName: \"kubernetes.io/projected/15e25bb5-0316-4fe0-87de-de00d7c74741-kube-api-access-tjrlf\") pod \"package-server-manager-789f6589d5-r6gpz\" (UID: \"15e25bb5-0316-4fe0-87de-de00d7c74741\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006582 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006613 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006635 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmwm8\" (UniqueName: \"kubernetes.io/projected/7a8afa31-175a-4149-aa75-6b68fba36433-kube-api-access-dmwm8\") pod \"multus-admission-controller-857f4d67dd-m27qh\" (UID: \"7a8afa31-175a-4149-aa75-6b68fba36433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006650 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-metrics-certs\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006668 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-registration-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc 
kubenswrapper[4955]: I1128 06:23:38.006685 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-stats-auth\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006713 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aef4e7b4-fffd-4637-9ce9-22264299ad8b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006728 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-mountpoint-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006746 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz242\" (UniqueName: \"kubernetes.io/projected/8e298e78-6a12-4148-aba0-25829ecf409c-kube-api-access-qz242\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006764 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a148b60f-cd30-40a2-938d-133d328901a3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006781 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/15e25bb5-0316-4fe0-87de-de00d7c74741-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r6gpz\" (UID: \"15e25bb5-0316-4fe0-87de-de00d7c74741\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006797 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7jrx\" (UniqueName: \"kubernetes.io/projected/41149789-c52d-44af-9616-92969e6d37c2-kube-api-access-f7jrx\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006824 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-signing-cabundle\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006841 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd42l\" (UniqueName: \"kubernetes.io/projected/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-kube-api-access-dd42l\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006856 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kwcw9\" (UniqueName: \"kubernetes.io/projected/ae90aa07-e0e4-47ea-8297-449220260a93-kube-api-access-kwcw9\") pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006870 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df2fb484-2a0d-4283-bdb2-4b7915541845-proxy-tls\") pod \"machine-config-controller-84d6567774-466vn\" (UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006886 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllpn\" (UniqueName: \"kubernetes.io/projected/df2fb484-2a0d-4283-bdb2-4b7915541845-kube-api-access-jllpn\") pod \"machine-config-controller-84d6567774-466vn\" (UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006901 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-profile-collector-cert\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006915 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-plugins-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" 
Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006929 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41149789-c52d-44af-9616-92969e6d37c2-config\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006944 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae62707c-beac-4d1f-96e0-fb7d9094a58f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.006988 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-signing-key\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007034 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10eb260e-b06b-4c05-bd29-5cae90517573-webhook-cert\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007071 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10eb260e-b06b-4c05-bd29-5cae90517573-apiservice-cert\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: 
\"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007086 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae90aa07-e0e4-47ea-8297-449220260a93-config-volume\") pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007100 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41149789-c52d-44af-9616-92969e6d37c2-serving-cert\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df2fb484-2a0d-4283-bdb2-4b7915541845-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-466vn\" (UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007155 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007175 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ts8qg\" (UniqueName: \"kubernetes.io/projected/3d633e7c-2b49-443a-9f5c-f5b8d475e399-kube-api-access-ts8qg\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007191 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzbjt\" (UniqueName: \"kubernetes.io/projected/fa934dd3-d47c-454c-80fb-f40124c61e2d-kube-api-access-vzbjt\") pod \"migrator-59844c95c7-j4mts\" (UID: \"fa934dd3-d47c-454c-80fb-f40124c61e2d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007225 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-config\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007251 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a148b60f-cd30-40a2-938d-133d328901a3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007266 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtkpn\" (UniqueName: \"kubernetes.io/projected/ce7cab06-019f-4d6b-82be-e9cacdbddb06-kube-api-access-rtkpn\") pod \"ingress-canary-kmfw2\" (UID: \"ce7cab06-019f-4d6b-82be-e9cacdbddb06\") " 
pod="openshift-ingress-canary/ingress-canary-kmfw2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007281 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-csi-data-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007328 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007344 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10eb260e-b06b-4c05-bd29-5cae90517573-tmpfs\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007998 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54ngx\" (UniqueName: \"kubernetes.io/projected/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-kube-api-access-54ngx\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008043 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-metrics-tls\") pod 
\"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008060 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-node-bootstrap-token\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008078 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7kv4\" (UniqueName: \"kubernetes.io/projected/a148b60f-cd30-40a2-938d-133d328901a3-kube-api-access-q7kv4\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008097 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a8afa31-175a-4149-aa75-6b68fba36433-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m27qh\" (UID: \"7a8afa31-175a-4149-aa75-6b68fba36433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008111 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d633e7c-2b49-443a-9f5c-f5b8d475e399-srv-cert\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008128 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hkn6d\" (UniqueName: \"kubernetes.io/projected/aef4e7b4-fffd-4637-9ce9-22264299ad8b-kube-api-access-hkn6d\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008133 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae62707c-beac-4d1f-96e0-fb7d9094a58f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008147 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aef4e7b4-fffd-4637-9ce9-22264299ad8b-proxy-tls\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008307 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpnrn\" (UniqueName: \"kubernetes.io/projected/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-kube-api-access-zpnrn\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.008343 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.009306 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-config\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.009308 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-registration-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.009388 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-config-volume\") pod \"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.010363 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e298e78-6a12-4148-aba0-25829ecf409c-service-ca-bundle\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.010428 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-socket-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: 
\"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.010687 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae90aa07-e0e4-47ea-8297-449220260a93-config-volume\") pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.011544 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aef4e7b4-fffd-4637-9ce9-22264299ad8b-images\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.011985 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-csi-data-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.007281 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-mountpoint-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.012333 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-28 06:23:38.512316387 +0000 UTC m=+141.101572047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.012942 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a148b60f-cd30-40a2-938d-133d328901a3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.013020 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/42eecb9c-abf9-4306-92ee-6cbb96e76068-plugins-dir\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.013451 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a148b60f-cd30-40a2-938d-133d328901a3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.016277 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/ce7cab06-019f-4d6b-82be-e9cacdbddb06-cert\") pod \"ingress-canary-kmfw2\" (UID: \"ce7cab06-019f-4d6b-82be-e9cacdbddb06\") " pod="openshift-ingress-canary/ingress-canary-kmfw2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.016811 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/df2fb484-2a0d-4283-bdb2-4b7915541845-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-466vn\" (UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.016963 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.017279 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-metrics-certs\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.017370 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-default-certificate\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.017720 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-certs\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.018336 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/15e25bb5-0316-4fe0-87de-de00d7c74741-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-r6gpz\" (UID: \"15e25bb5-0316-4fe0-87de-de00d7c74741\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.019295 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41149789-c52d-44af-9616-92969e6d37c2-serving-cert\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.019524 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-metrics-tls\") pod \"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.021642 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/df2fb484-2a0d-4283-bdb2-4b7915541845-proxy-tls\") pod \"machine-config-controller-84d6567774-466vn\" (UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.022005 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.022306 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-node-bootstrap-token\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.023716 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d633e7c-2b49-443a-9f5c-f5b8d475e399-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.023882 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aef4e7b4-fffd-4637-9ce9-22264299ad8b-proxy-tls\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.023930 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-srv-cert\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.024733 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-signing-key\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.024794 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7a8afa31-175a-4149-aa75-6b68fba36433-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-m27qh\" (UID: \"7a8afa31-175a-4149-aa75-6b68fba36433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.025983 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.027259 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae62707c-beac-4d1f-96e0-fb7d9094a58f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.028513 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ae90aa07-e0e4-47ea-8297-449220260a93-secret-volume\") pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.029059 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.033042 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/10eb260e-b06b-4c05-bd29-5cae90517573-tmpfs\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.033446 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41149789-c52d-44af-9616-92969e6d37c2-config\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.033628 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/10eb260e-b06b-4c05-bd29-5cae90517573-apiservice-cert\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.034541 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8e298e78-6a12-4148-aba0-25829ecf409c-stats-auth\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " 
pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.036208 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.038029 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d633e7c-2b49-443a-9f5c-f5b8d475e399-srv-cert\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.041339 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aef4e7b4-fffd-4637-9ce9-22264299ad8b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.042125 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-signing-cabundle\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.042363 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8s6x\" (UniqueName: \"kubernetes.io/projected/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-kube-api-access-b8s6x\") pod \"console-f9d7485db-sxskz\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.043258 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-profile-collector-cert\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.049751 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/10eb260e-b06b-4c05-bd29-5cae90517573-webhook-cert\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.063522 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kwd\" (UniqueName: \"kubernetes.io/projected/0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1-kube-api-access-z2kwd\") pod \"dns-operator-744455d44c-qj5n6\" (UID: \"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.071046 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.072407 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.075094 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.076040 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4e8a345-0a04-4ad6-b7c7-7805823c4026-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-sslmp\" (UID: \"e4e8a345-0a04-4ad6-b7c7-7805823c4026\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.078664 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.079323 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-z7ncs"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.087743 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.088937 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-bound-sa-token\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.101337 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.108953 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.109213 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:38.609187584 +0000 UTC m=+141.198443154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.109640 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.110350 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:38.610337387 +0000 UTC m=+141.199592957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: W1128 06:23:38.124719 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1db2797b_53f3_4ccd_b212_9d5e3120820c.slice/crio-c06efeffd69f489322ebe19525b4e284be8091217b57583df9e877a0185c8bb8 WatchSource:0}: Error finding container c06efeffd69f489322ebe19525b4e284be8091217b57583df9e877a0185c8bb8: Status 404 returned error can't find the container with id c06efeffd69f489322ebe19525b4e284be8091217b57583df9e877a0185c8bb8 Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.127756 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2gj6\" (UniqueName: \"kubernetes.io/projected/2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8-kube-api-access-l2gj6\") pod \"dns-default-vlq67\" (UID: \"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8\") " pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.162733 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts8qg\" (UniqueName: \"kubernetes.io/projected/3d633e7c-2b49-443a-9f5c-f5b8d475e399-kube-api-access-ts8qg\") pod \"olm-operator-6b444d44fb-r54d2\" (UID: \"3d633e7c-2b49-443a-9f5c-f5b8d475e399\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.175644 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.177655 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-8pl8k"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.179542 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-k8wp6"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.180351 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzbjt\" (UniqueName: \"kubernetes.io/projected/fa934dd3-d47c-454c-80fb-f40124c61e2d-kube-api-access-vzbjt\") pod \"migrator-59844c95c7-j4mts\" (UID: \"fa934dd3-d47c-454c-80fb-f40124c61e2d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.185435 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf959\" (UniqueName: \"kubernetes.io/projected/10eb260e-b06b-4c05-bd29-5cae90517573-kube-api-access-vf959\") pod \"packageserver-d55dfcdfc-ctrqm\" (UID: \"10eb260e-b06b-4c05-bd29-5cae90517573\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.210735 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.211414 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:23:38.711397112 +0000 UTC m=+141.300652682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.214889 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmwm8\" (UniqueName: \"kubernetes.io/projected/7a8afa31-175a-4149-aa75-6b68fba36433-kube-api-access-dmwm8\") pod \"multus-admission-controller-857f4d67dd-m27qh\" (UID: \"7a8afa31-175a-4149-aa75-6b68fba36433\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.228878 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtkpn\" (UniqueName: \"kubernetes.io/projected/ce7cab06-019f-4d6b-82be-e9cacdbddb06-kube-api-access-rtkpn\") pod \"ingress-canary-kmfw2\" (UID: \"ce7cab06-019f-4d6b-82be-e9cacdbddb06\") " pod="openshift-ingress-canary/ingress-canary-kmfw2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.234836 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.242272 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.260410 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8twz7\" (UniqueName: \"kubernetes.io/projected/42eecb9c-abf9-4306-92ee-6cbb96e76068-kube-api-access-8twz7\") pod \"csi-hostpathplugin-bd6qn\" (UID: \"42eecb9c-abf9-4306-92ee-6cbb96e76068\") " pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.267348 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w57bs\" (UniqueName: \"kubernetes.io/projected/e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8-kube-api-access-w57bs\") pod \"service-ca-9c57cc56f-hg4kn\" (UID: \"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8\") " pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.290360 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmqk4\" (UniqueName: \"kubernetes.io/projected/ae62707c-beac-4d1f-96e0-fb7d9094a58f-kube-api-access-bmqk4\") pod \"kube-storage-version-migrator-operator-b67b599dd-2jjx8\" (UID: \"ae62707c-beac-4d1f-96e0-fb7d9094a58f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:38 crc kubenswrapper[4955]: W1128 06:23:38.298981 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9a5dd12_fb17_4fab_b1f9_9a005cc2877a.slice/crio-3a93d181e6bfe85e06d4a6d1e0e0ceabbbc783096ec4e8999f45fbc3c4662e92 WatchSource:0}: Error finding container 3a93d181e6bfe85e06d4a6d1e0e0ceabbbc783096ec4e8999f45fbc3c4662e92: Status 404 returned error can't find 
the container with id 3a93d181e6bfe85e06d4a6d1e0e0ceabbbc783096ec4e8999f45fbc3c4662e92 Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.308392 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjrlf\" (UniqueName: \"kubernetes.io/projected/15e25bb5-0316-4fe0-87de-de00d7c74741-kube-api-access-tjrlf\") pod \"package-server-manager-789f6589d5-r6gpz\" (UID: \"15e25bb5-0316-4fe0-87de-de00d7c74741\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.308884 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qhfdt"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.313633 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.313962 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:38.81395023 +0000 UTC m=+141.403205800 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.324933 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bfde895e-d3ab-4d4c-a5ce-e309f52a7f52-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ghfl6\" (UID: \"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.329664 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.335825 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.343381 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.347200 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwcw9\" (UniqueName: \"kubernetes.io/projected/ae90aa07-e0e4-47ea-8297-449220260a93-kube-api-access-kwcw9\") pod \"collect-profiles-29405175-fppsm\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.364010 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz242\" (UniqueName: \"kubernetes.io/projected/8e298e78-6a12-4148-aba0-25829ecf409c-kube-api-access-qz242\") pod \"router-default-5444994796-pw7x8\" (UID: \"8e298e78-6a12-4148-aba0-25829ecf409c\") " pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.389288 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllpn\" (UniqueName: \"kubernetes.io/projected/df2fb484-2a0d-4283-bdb2-4b7915541845-kube-api-access-jllpn\") pod \"machine-config-controller-84d6567774-466vn\" (UID: \"df2fb484-2a0d-4283-bdb2-4b7915541845\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.394255 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.401783 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.406300 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54ngx\" (UniqueName: \"kubernetes.io/projected/0e0af4cf-4fb0-4244-a3f1-e31d3028710a-kube-api-access-54ngx\") pod \"catalog-operator-68c6474976-fjld9\" (UID: \"0e0af4cf-4fb0-4244-a3f1-e31d3028710a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.412616 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.414841 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.415373 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:38.915346525 +0000 UTC m=+141.504602095 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.428362 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpnrn\" (UniqueName: \"kubernetes.io/projected/0d337492-faa7-4ee3-beb5-c87ba5b0dc93-kube-api-access-zpnrn\") pod \"machine-config-server-t8jwg\" (UID: \"0d337492-faa7-4ee3-beb5-c87ba5b0dc93\") " pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.428622 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.438375 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.445745 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.446004 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd42l\" (UniqueName: \"kubernetes.io/projected/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-kube-api-access-dd42l\") pod \"marketplace-operator-79b997595-bf8xd\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.458735 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.462093 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.466892 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.473039 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7jrx\" (UniqueName: \"kubernetes.io/projected/41149789-c52d-44af-9616-92969e6d37c2-kube-api-access-f7jrx\") pod \"service-ca-operator-777779d784-zdfh5\" (UID: \"41149789-c52d-44af-9616-92969e6d37c2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.483669 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.487125 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkn6d\" (UniqueName: \"kubernetes.io/projected/aef4e7b4-fffd-4637-9ce9-22264299ad8b-kube-api-access-hkn6d\") pod \"machine-config-operator-74547568cd-pj5dg\" (UID: \"aef4e7b4-fffd-4637-9ce9-22264299ad8b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.490703 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.504867 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.504909 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.509710 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7kv4\" (UniqueName: \"kubernetes.io/projected/a148b60f-cd30-40a2-938d-133d328901a3-kube-api-access-q7kv4\") pod \"openshift-controller-manager-operator-756b6f6bc6-7ff5w\" (UID: \"a148b60f-cd30-40a2-938d-133d328901a3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.514408 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.515857 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.516692 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.016676149 +0000 UTC m=+141.605931719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.517060 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.520324 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.524186 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" event={"ID":"0d833f53-a5d1-47ea-ab5d-77bee61787fe","Type":"ContainerStarted","Data":"307dc0453e2ba504b4b0e1b4c7f04cc38a5e63e6637360aee3bfbcf565b6ef8a"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.528656 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kmfw2" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.539843 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" event={"ID":"ad19b4ea-ecc3-45ba-a946-171e6f2daa38","Type":"ContainerStarted","Data":"0e2251d1cc08cd98a8d53b6bf84720cbbd640c1ba949970c8de34b5f985dee25"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.539903 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" event={"ID":"ad19b4ea-ecc3-45ba-a946-171e6f2daa38","Type":"ContainerStarted","Data":"e01d6829ccc164854e17b283d9374a3b987919350ba60ac8d6f015a368f04851"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.546072 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" event={"ID":"74bc1065-1326-4674-8a9d-02b7b7fce98b","Type":"ContainerStarted","Data":"549ccd51125ef7721d6beb4c75072bfcd150db055a0eebd7a20733a849633675"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.549513 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" event={"ID":"a7ea6110-fef8-49d3-9f79-8d6da21e8091","Type":"ContainerStarted","Data":"f5db7e839d95d71c1f130c36fb7017b1010a7e224f26dda408ca473779138e75"} Nov 28 
06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.549576 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" event={"ID":"a7ea6110-fef8-49d3-9f79-8d6da21e8091","Type":"ContainerStarted","Data":"1b80eeece41984bc0310f0dd3ea384fc8efbe3f9fb64de447aa4ecdb81740607"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.553313 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.560034 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" event={"ID":"53dcbcb4-95a6-451e-b630-e2e067c6cd3d","Type":"ContainerStarted","Data":"c4eca0b9e4e1047d62b18f647d320bb1eff6f48868ea282b80e409b9f298196f"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.560086 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" event={"ID":"53dcbcb4-95a6-451e-b630-e2e067c6cd3d","Type":"ContainerStarted","Data":"6318e190cc3bd8b14f1ca29e0e4bb919883855f788629d23480a3b1bd3f1d753"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.560721 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-t8jwg" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.568105 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" event={"ID":"c9a5dd12-fb17-4fab-b1f9-9a005cc2877a","Type":"ContainerStarted","Data":"3a93d181e6bfe85e06d4a6d1e0e0ceabbbc783096ec4e8999f45fbc3c4662e92"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.570782 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" event={"ID":"69319eb6-a378-4a28-a980-282c075c1c78","Type":"ContainerStarted","Data":"38c79c74f0744aa9e5023ff7d3740878ab8b9cbe2ebc3ed49d9fa66937a5cd54"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.578325 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" event={"ID":"e5a1023e-2f70-4592-b507-8a198260ed35","Type":"ContainerStarted","Data":"095c6ced1aa7ee1b776c974f47908421c2f2558af0141bd410e20383753aeef1"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.601691 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7vbtz" event={"ID":"d6fd184f-b649-4fb6-a1d6-24b158d3f9df","Type":"ContainerStarted","Data":"3f4e99a72b71c5069699cb01dd57739ac5ab77563ca36f8f2a902e3b8eaff830"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.601748 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7vbtz" event={"ID":"d6fd184f-b649-4fb6-a1d6-24b158d3f9df","Type":"ContainerStarted","Data":"2252bebb587c72f074a12f430d0c99d482ef073c53928360e83f731fa64f69a8"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.603478 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.617199 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.619170 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.119150284 +0000 UTC m=+141.708405854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.630709 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v94cg"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.634886 4955 generic.go:334] "Generic (PLEG): container finished" podID="37176422-d3bf-429f-af47-8dd4e135b40b" containerID="16d8886b3d28d20b9b9779edb977356370675df14fb3a2856858b9c81f921db4" exitCode=0 Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.634975 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" 
event={"ID":"37176422-d3bf-429f-af47-8dd4e135b40b","Type":"ContainerDied","Data":"16d8886b3d28d20b9b9779edb977356370675df14fb3a2856858b9c81f921db4"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.635004 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" event={"ID":"37176422-d3bf-429f-af47-8dd4e135b40b","Type":"ContainerStarted","Data":"34fd1f2a5b83baaedbb865d074bf8b8cecc38be51f80bb96998f8cb3d4918018"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.647973 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" event={"ID":"4b20c134-37f2-42c2-be5f-d6f4a86d7b10","Type":"ContainerStarted","Data":"c0912cbd706498d1ece83957a6b2c22aaae99b63a8268a6ecc21fb24b6ccc60e"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.648012 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" event={"ID":"4b20c134-37f2-42c2-be5f-d6f4a86d7b10","Type":"ContainerStarted","Data":"82a0ad5e38d0b6f004f8677cfceb0297e1d782d09dc5f4bf87c0f1b0030cf4ea"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.650993 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" event={"ID":"ca74d299-d21d-4169-adde-500339ec6876","Type":"ContainerStarted","Data":"c0497fc0f74c96f166283640e88f9fe1c02e576a79173e3b6a9ba47b82465777"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.653455 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8pl8k" event={"ID":"44a739c5-de17-458b-ab79-74c4bd74a43b","Type":"ContainerStarted","Data":"e353d5e96dc10ba89017c167a25b169c2cf07a52585a0db6321ffd8586f3c156"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.659378 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mg445" 
event={"ID":"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb","Type":"ContainerStarted","Data":"02cfc69364a0293ca64bf3c46dee4b3fce22646a2d7be106ebf35380feb99a57"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.665073 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" event={"ID":"1db2797b-53f3-4ccd-b212-9d5e3120820c","Type":"ContainerStarted","Data":"c06efeffd69f489322ebe19525b4e284be8091217b57583df9e877a0185c8bb8"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.667654 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" event={"ID":"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d","Type":"ContainerStarted","Data":"ffbd7e298861a2fa530547c58fe5857009f8cb3ae049174b4948e4c56e95be6a"} Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.719761 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.724280 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.224262773 +0000 UTC m=+141.813518343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.752732 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.775302 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-vlq67"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.827957 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.828745 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.328729214 +0000 UTC m=+141.917984784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.896271 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qj5n6"] Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.912102 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7vbtz" Nov 28 06:23:38 crc kubenswrapper[4955]: I1128 06:23:38.938401 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:38 crc kubenswrapper[4955]: E1128 06:23:38.938862 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.438846094 +0000 UTC m=+142.028101664 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.041307 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.041834 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.541813633 +0000 UTC m=+142.131069203 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.144539 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.145245 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.645227434 +0000 UTC m=+142.234483084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.245739 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.245973 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.74591859 +0000 UTC m=+142.335174150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.246356 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.246651 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.74663613 +0000 UTC m=+142.335891700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.261185 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-c95pw" podStartSLOduration=123.261169186 podStartE2EDuration="2m3.261169186s" podCreationTimestamp="2025-11-28 06:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:39.260714094 +0000 UTC m=+141.849969664" watchObservedRunningTime="2025-11-28 06:23:39.261169186 +0000 UTC m=+141.850424756" Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.275771 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-466vn"] Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.349423 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.350095 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:23:39.850081403 +0000 UTC m=+142.439336973 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.450820 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.451112 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:39.951100397 +0000 UTC m=+142.540355967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.557210 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.557857 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.057840692 +0000 UTC m=+142.647096262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.587201 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp"] Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.663307 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.663850 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.163837606 +0000 UTC m=+142.753093176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.765289 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.765693 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.265679853 +0000 UTC m=+142.854935423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805428 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pw7x8" event={"ID":"8e298e78-6a12-4148-aba0-25829ecf409c","Type":"ContainerStarted","Data":"a5d5c5fa9a8f1179affde2a9bc9d971afbbbb904beb5067e486d5f276a3e127c"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805465 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805476 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" event={"ID":"7008cba8-7ab6-4b6d-9340-c7ac6157d59a","Type":"ContainerStarted","Data":"edcdc5c591bab158519ac7d90d804e96b7185b8064e05d367c0c0ff9abd4fd8b"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805486 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" event={"ID":"4b20c134-37f2-42c2-be5f-d6f4a86d7b10","Type":"ContainerStarted","Data":"02f63a6b70abf534a79cc1437cd38b475a01fc40e2ddeeb7d2fa9cfbc8b4c0cb"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805498 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" 
event={"ID":"74bc1065-1326-4674-8a9d-02b7b7fce98b","Type":"ContainerStarted","Data":"55471b003a7b7e8493c6c889af195907f2d2e345c83476821973c2859d0586f2"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805752 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vlq67" event={"ID":"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8","Type":"ContainerStarted","Data":"84b6d4161ee87152aecd1f4880324a8ebfd79e6e14ce50c23a30afb11b6f1e02"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805762 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" event={"ID":"0a49aa7e-6973-4a7b-9b1d-71922376ee73","Type":"ContainerStarted","Data":"dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805771 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" event={"ID":"0a49aa7e-6973-4a7b-9b1d-71922376ee73","Type":"ContainerStarted","Data":"3237a811b8a34b9842544b4c8ca5100e959d16d4c858bdaac8ffe429ac9065d2"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.805781 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" event={"ID":"c9a5dd12-fb17-4fab-b1f9-9a005cc2877a","Type":"ContainerStarted","Data":"c460e7c4938b2d031a269a8777ced78524d5f6072c44764740ff2e111d851af8"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.806081 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.807386 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" 
event={"ID":"69319eb6-a378-4a28-a980-282c075c1c78","Type":"ContainerStarted","Data":"3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.825687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" event={"ID":"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1","Type":"ContainerStarted","Data":"b07bf5aaf82e003942aab953f6b6dcfab1095bc866b1ec5d550c15d8b08cbf19"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.839305 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-8pl8k" event={"ID":"44a739c5-de17-458b-ab79-74c4bd74a43b","Type":"ContainerStarted","Data":"f87728b31820027c1c87b8466fcfa3d3c2250351dab906c37ccb1a48e5c3864a"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.839835 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-8pl8k" Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.860607 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" event={"ID":"1db2797b-53f3-4ccd-b212-9d5e3120820c","Type":"ContainerStarted","Data":"c71d29ab27556eb24ef15eaa75a73812a39d8b162ae6c232ca55476413c61ee1"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.866459 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.867629 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.367617554 +0000 UTC m=+142.956873124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.871472 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" event={"ID":"ca74d299-d21d-4169-adde-500339ec6876","Type":"ContainerStarted","Data":"dbdd7558db2941e3e631a3ec18cb93b3ffb1cfec4fd4d3edab4d5faac204edff"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.885471 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" event={"ID":"0d833f53-a5d1-47ea-ab5d-77bee61787fe","Type":"ContainerStarted","Data":"a41334a2c78ac2ec20526e57e3a0d5cff7ea9e7393132ed4a17f8ce13dff8598"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.887677 4955 generic.go:334] "Generic (PLEG): container finished" podID="5b51a0dc-e121-4ba8-b0be-b01cf8553bfb" containerID="b8b6472f5501a09b2f039b8ce887315e0f6651f7542c9c6b4afc30929c18a4df" exitCode=0 Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.887710 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mg445" event={"ID":"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb","Type":"ContainerDied","Data":"b8b6472f5501a09b2f039b8ce887315e0f6651f7542c9c6b4afc30929c18a4df"} Nov 28 06:23:39 
crc kubenswrapper[4955]: I1128 06:23:39.892152 4955 generic.go:334] "Generic (PLEG): container finished" podID="a7ea6110-fef8-49d3-9f79-8d6da21e8091" containerID="f5db7e839d95d71c1f130c36fb7017b1010a7e224f26dda408ca473779138e75" exitCode=0 Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.892202 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" event={"ID":"a7ea6110-fef8-49d3-9f79-8d6da21e8091","Type":"ContainerDied","Data":"f5db7e839d95d71c1f130c36fb7017b1010a7e224f26dda408ca473779138e75"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.892220 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" event={"ID":"a7ea6110-fef8-49d3-9f79-8d6da21e8091","Type":"ContainerStarted","Data":"03410ab5fccc9f9e7c600d4f23ad6173a566fe6b7678892288376fd287787c23"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.892679 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.904396 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" event={"ID":"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d","Type":"ContainerStarted","Data":"ca8414be798803c452f380b5d57d14841f3b1771e88340d673a80015fd819b5e"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.912588 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" event={"ID":"df2fb484-2a0d-4283-bdb2-4b7915541845","Type":"ContainerStarted","Data":"af48509aab8053c8a810f0827acab2c1c9d6ddd3fd8074a1ad58d1c5ae1e0411"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.924986 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" event={"ID":"e5a1023e-2f70-4592-b507-8a198260ed35","Type":"ContainerStarted","Data":"cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.925462 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.927111 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t8jwg" event={"ID":"0d337492-faa7-4ee3-beb5-c87ba5b0dc93","Type":"ContainerStarted","Data":"0d946c03d329cc0fad1f36b5811409b2fc7eb39fb7075d68e9fc9cd43404513a"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.929083 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" event={"ID":"3d633e7c-2b49-443a-9f5c-f5b8d475e399","Type":"ContainerStarted","Data":"8269b46ad563ead243ade61ae890a477a9b21e90899c0c7d49f4923a829e6751"} Nov 28 06:23:39 crc kubenswrapper[4955]: I1128 06:23:39.967859 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:39 crc kubenswrapper[4955]: E1128 06:23:39.969494 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.469477342 +0000 UTC m=+143.058732912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.023313 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qzvzd" podStartSLOduration=124.023298167 podStartE2EDuration="2m4.023298167s" podCreationTimestamp="2025-11-28 06:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:39.981648993 +0000 UTC m=+142.570904583" watchObservedRunningTime="2025-11-28 06:23:40.023298167 +0000 UTC m=+142.612553737" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.074404 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.076982 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.576969178 +0000 UTC m=+143.166224738 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.081169 4955 patch_prober.go:28] interesting pod/downloads-7954f5f757-8pl8k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.081297 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8pl8k" podUID="44a739c5-de17-458b-ab79-74c4bd74a43b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.093523 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.095869 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.147949 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7vbtz" podStartSLOduration=123.147934002 podStartE2EDuration="2m3.147934002s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.139955399 +0000 UTC m=+142.729210969" watchObservedRunningTime="2025-11-28 06:23:40.147934002 +0000 UTC m=+142.737189572" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.176216 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.176683 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.676647916 +0000 UTC m=+143.265903486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.235258 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qhfdt" podStartSLOduration=123.235238044 podStartE2EDuration="2m3.235238044s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.234830263 +0000 UTC m=+142.824085833" watchObservedRunningTime="2025-11-28 06:23:40.235238044 +0000 UTC 
m=+142.824493614" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.270887 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.279458 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.279802 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.77979178 +0000 UTC m=+143.369047350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.287272 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.305694 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sxskz"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.305756 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.306008 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-8pl8k" podStartSLOduration=123.305989913 podStartE2EDuration="2m3.305989913s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.300841599 +0000 UTC m=+142.890097169" watchObservedRunningTime="2025-11-28 06:23:40.305989913 +0000 UTC m=+142.895245503" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.351966 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-k8wp6" podStartSLOduration=124.351951018 podStartE2EDuration="2m4.351951018s" podCreationTimestamp="2025-11-28 06:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.350404305 +0000 UTC m=+142.939659885" watchObservedRunningTime="2025-11-28 06:23:40.351951018 +0000 UTC m=+142.941206588" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.384057 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.384532 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.884493408 +0000 UTC m=+143.473748978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.392173 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vbqfw" podStartSLOduration=123.392150952 podStartE2EDuration="2m3.392150952s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.390114075 +0000 UTC m=+142.979369665" watchObservedRunningTime="2025-11-28 06:23:40.392150952 +0000 UTC m=+142.981406522" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.429358 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" podStartSLOduration=123.429340922 podStartE2EDuration="2m3.429340922s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.426847932 +0000 UTC m=+143.016103522" watchObservedRunningTime="2025-11-28 06:23:40.429340922 +0000 UTC m=+143.018596492" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.445291 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kmfw2"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.481794 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.486424 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.486777 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:40.986765018 +0000 UTC m=+143.576020578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.516964 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.520062 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4c6dx" podStartSLOduration=123.520043938 podStartE2EDuration="2m3.520043938s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.484306659 +0000 
UTC m=+143.073562249" watchObservedRunningTime="2025-11-28 06:23:40.520043938 +0000 UTC m=+143.109299508" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.522655 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5"] Nov 28 06:23:40 crc kubenswrapper[4955]: W1128 06:23:40.564350 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa934dd3_d47c_454c_80fb_f40124c61e2d.slice/crio-929a39a8f702653869e501e47027e687a65d6bab4d97309a990defa1e678687a WatchSource:0}: Error finding container 929a39a8f702653869e501e47027e687a65d6bab4d97309a990defa1e678687a: Status 404 returned error can't find the container with id 929a39a8f702653869e501e47027e687a65d6bab4d97309a990defa1e678687a Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.569786 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.571076 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vjlfm" podStartSLOduration=123.571058735 podStartE2EDuration="2m3.571058735s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.540979213 +0000 UTC m=+143.130234793" watchObservedRunningTime="2025-11-28 06:23:40.571058735 +0000 UTC m=+143.160314305" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.586987 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.587632 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.087617358 +0000 UTC m=+143.676872928 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.623035 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" podStartSLOduration=124.623011587 podStartE2EDuration="2m4.623011587s" podCreationTimestamp="2025-11-28 06:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.611137885 +0000 UTC m=+143.200393465" watchObservedRunningTime="2025-11-28 06:23:40.623011587 +0000 UTC m=+143.212267157" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.652704 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.660459 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-bd6qn"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.680885 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.690559 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.690860 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.190848784 +0000 UTC m=+143.780104354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.703843 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" podStartSLOduration=123.703806006 podStartE2EDuration="2m3.703806006s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.66101316 +0000 UTC m=+143.250268740" watchObservedRunningTime="2025-11-28 06:23:40.703806006 +0000 UTC m=+143.293061576" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 
06:23:40.705454 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" podStartSLOduration=123.705435302 podStartE2EDuration="2m3.705435302s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.692014986 +0000 UTC m=+143.281270566" watchObservedRunningTime="2025-11-28 06:23:40.705435302 +0000 UTC m=+143.294690872" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.776048 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gn2ld" podStartSLOduration=123.776033586 podStartE2EDuration="2m3.776033586s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:40.737788226 +0000 UTC m=+143.327043836" watchObservedRunningTime="2025-11-28 06:23:40.776033586 +0000 UTC m=+143.365289156" Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.793889 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.794294 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.294280956 +0000 UTC m=+143.883536526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.809567 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.832807 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hg4kn"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.850877 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.865816 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-m27qh"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.867949 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.880268 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bf8xd"] Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.896142 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:40 crc kubenswrapper[4955]: E1128 06:23:40.896552 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.396541385 +0000 UTC m=+143.985796955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:40 crc kubenswrapper[4955]: W1128 06:23:40.908520 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae62707c_beac_4d1f_96e0_fb7d9094a58f.slice/crio-ca98d1850475398ab81b68c3635102f503e6556619f2e72853c4074055dfe4c9 WatchSource:0}: Error finding container ca98d1850475398ab81b68c3635102f503e6556619f2e72853c4074055dfe4c9: Status 404 returned error can't find the container with id ca98d1850475398ab81b68c3635102f503e6556619f2e72853c4074055dfe4c9 Nov 28 06:23:40 crc kubenswrapper[4955]: I1128 06:23:40.985097 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" event={"ID":"ae62707c-beac-4d1f-96e0-fb7d9094a58f","Type":"ContainerStarted","Data":"ca98d1850475398ab81b68c3635102f503e6556619f2e72853c4074055dfe4c9"} Nov 28 06:23:40 crc kubenswrapper[4955]: W1128 06:23:40.986488 4955 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44ffa22c_63e2_4eec_90df_aaad3c7cdbe6.slice/crio-3d5bd308295fbb5e0353961c4014c4b333cf5443a2e65f0e4f60451bfe2bb1f5 WatchSource:0}: Error finding container 3d5bd308295fbb5e0353961c4014c4b333cf5443a2e65f0e4f60451bfe2bb1f5: Status 404 returned error can't find the container with id 3d5bd308295fbb5e0353961c4014c4b333cf5443a2e65f0e4f60451bfe2bb1f5 Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.005339 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.005792 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.50577671 +0000 UTC m=+144.095032280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.013036 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" event={"ID":"0e0af4cf-4fb0-4244-a3f1-e31d3028710a","Type":"ContainerStarted","Data":"cf3dfb0ad8fb913a1c824a8558b59c20246ee0b80bb0b312ad0ffe17c1bc3bd0"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.027388 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" event={"ID":"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52","Type":"ContainerStarted","Data":"a8ad7319ece68d6d583f0ee85d68f66f30547c9aecb5545f5d26d3a119933f52"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.078231 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" event={"ID":"0d9779f3-5d4d-4a2c-a1c6-159ae32c360d","Type":"ContainerStarted","Data":"e8926a1edc008a4956094f7828222f249df1070f25cfe9d1c772b206ebf72d42"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.107486 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:41 crc 
kubenswrapper[4955]: E1128 06:23:41.107837 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.607826553 +0000 UTC m=+144.197082123 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.140893 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pw7x8" event={"ID":"8e298e78-6a12-4148-aba0-25829ecf409c","Type":"ContainerStarted","Data":"a97fed2676ff9b93624cb96ddf8f374c368496fe8583dc7bfc261cb5fea945aa"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.173458 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-j5ss6" podStartSLOduration=124.173443648 podStartE2EDuration="2m4.173443648s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.172546903 +0000 UTC m=+143.761802483" watchObservedRunningTime="2025-11-28 06:23:41.173443648 +0000 UTC m=+143.762699218" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.178227 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vlq67" 
event={"ID":"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8","Type":"ContainerStarted","Data":"a322d11aafd53aaa64f948fb9ef9b08e6b154019fd78459479d6831078af459e"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.195267 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" event={"ID":"df2fb484-2a0d-4283-bdb2-4b7915541845","Type":"ContainerStarted","Data":"af478094957841e90cc376cd9fd57e02fa6907440b89acce5da9f1f640861737"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.208956 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.209050 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.709033483 +0000 UTC m=+144.298289053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.214545 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.215940 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.715923246 +0000 UTC m=+144.305178816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.238601 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" event={"ID":"42eecb9c-abf9-4306-92ee-6cbb96e76068","Type":"ContainerStarted","Data":"67bdc3fd5c196944acf23b65c2e8886a0b38597d950d831220a0e339b2662d96"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.240081 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pw7x8" podStartSLOduration=124.240055741 podStartE2EDuration="2m4.240055741s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.238864527 +0000 UTC m=+143.828120097" watchObservedRunningTime="2025-11-28 06:23:41.240055741 +0000 UTC m=+143.829311311" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.250445 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-t8jwg" event={"ID":"0d337492-faa7-4ee3-beb5-c87ba5b0dc93","Type":"ContainerStarted","Data":"153482f6b396f62d14f4b870acb16ec6b11826cad98d4bb8bac35a39f4da8cbf"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.256132 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mg445" 
event={"ID":"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb","Type":"ContainerStarted","Data":"dc023a25d4b082d98bb6eecec3c11e866c44ef812f625afd519da0f44ba27322"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.274541 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" event={"ID":"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1","Type":"ContainerStarted","Data":"46e6f71ce7e45f170b9f8c68555392ea3c06c47a1d5b6f502597ba39b0594230"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.289588 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-t8jwg" podStartSLOduration=6.289562685 podStartE2EDuration="6.289562685s" podCreationTimestamp="2025-11-28 06:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.272996582 +0000 UTC m=+143.862252152" watchObservedRunningTime="2025-11-28 06:23:41.289562685 +0000 UTC m=+143.878818255" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.320090 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.320444 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.820420938 +0000 UTC m=+144.409676508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.340909 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" event={"ID":"15e25bb5-0316-4fe0-87de-de00d7c74741","Type":"ContainerStarted","Data":"fb6930125b04529ec8005cf3baa8adfd3ee9a7331688b559f8413c697cde1677"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.341175 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" event={"ID":"15e25bb5-0316-4fe0-87de-de00d7c74741","Type":"ContainerStarted","Data":"dc4d4fcc3a6f5ca9f1a272ca3ff119c88d652bef20915a0d48b74d175cb89e0d"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.355694 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" event={"ID":"fa934dd3-d47c-454c-80fb-f40124c61e2d","Type":"ContainerStarted","Data":"929a39a8f702653869e501e47027e687a65d6bab4d97309a990defa1e678687a"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.422606 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" event={"ID":"37176422-d3bf-429f-af47-8dd4e135b40b","Type":"ContainerStarted","Data":"87b4957b66863aabb0ac739c1c451bf5eaed7a0f9d7dcb5b9538f6dfa972f530"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.423629 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.425399 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:41.925353962 +0000 UTC m=+144.514609532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.438736 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" event={"ID":"7008cba8-7ab6-4b6d-9340-c7ac6157d59a","Type":"ContainerStarted","Data":"19ceffca6b8aa4453cafd39b2d314a3535a88dacbc24cc629e449df168558961"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.470099 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" event={"ID":"3d633e7c-2b49-443a-9f5c-f5b8d475e399","Type":"ContainerStarted","Data":"114f280e924ab96e77bc32d2b1640df22bf3fbe7389b8d6643fb4daf9088e0a5"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.470720 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" podStartSLOduration=124.4707092 podStartE2EDuration="2m4.4707092s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.466230595 +0000 UTC m=+144.055486175" watchObservedRunningTime="2025-11-28 06:23:41.4707092 +0000 UTC m=+144.059964780" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.473369 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.474132 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.477647 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mg445" podStartSLOduration=125.477633134 podStartE2EDuration="2m5.477633134s" podCreationTimestamp="2025-11-28 06:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.341462976 +0000 UTC m=+143.930718556" watchObservedRunningTime="2025-11-28 06:23:41.477633134 +0000 UTC m=+144.066888704" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.477873 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:41 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:41 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:41 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 
06:23:41.477923 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.502529 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-bqk5b" podStartSLOduration=124.502491179 podStartE2EDuration="2m4.502491179s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.501199343 +0000 UTC m=+144.090454913" watchObservedRunningTime="2025-11-28 06:23:41.502491179 +0000 UTC m=+144.091746749" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.503904 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" event={"ID":"41149789-c52d-44af-9616-92969e6d37c2","Type":"ContainerStarted","Data":"8d90191f9c84a6de42c569e67da688b140929f01286268abdd9877913d0f01a2"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.518986 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kmfw2" event={"ID":"ce7cab06-019f-4d6b-82be-e9cacdbddb06","Type":"ContainerStarted","Data":"d6379222270ff624e4d7732a56481ba3b5ddd0451dfbb939291e0e313e03640b"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.525698 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 
06:23:41.526011 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.025988186 +0000 UTC m=+144.615243746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.526193 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.526687 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.026676365 +0000 UTC m=+144.615931935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.539873 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" podStartSLOduration=124.539856164 podStartE2EDuration="2m4.539856164s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.537069456 +0000 UTC m=+144.126325026" watchObservedRunningTime="2025-11-28 06:23:41.539856164 +0000 UTC m=+144.129111734" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.548156 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" event={"ID":"e4e8a345-0a04-4ad6-b7c7-7805823c4026","Type":"ContainerStarted","Data":"7ba9b9703defedbf11fec4f1382db8a9cad89a9deff9440f9f47d1173eafb1e9"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.548202 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" event={"ID":"e4e8a345-0a04-4ad6-b7c7-7805823c4026","Type":"ContainerStarted","Data":"d2bfd566c821629d0d4631041c5d5390d27344aaa39f869b000b75a604c87857"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.564001 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r54d2" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 
06:23:41.567412 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kmfw2" podStartSLOduration=6.567395214 podStartE2EDuration="6.567395214s" podCreationTimestamp="2025-11-28 06:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.565776598 +0000 UTC m=+144.155032168" watchObservedRunningTime="2025-11-28 06:23:41.567395214 +0000 UTC m=+144.156650784" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.602950 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sxskz" event={"ID":"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af","Type":"ContainerStarted","Data":"35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.603001 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sxskz" event={"ID":"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af","Type":"ContainerStarted","Data":"791ddc5e2860620d41f1a3ae9081ed6fa949511939215432e19d512c094de13e"} Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.606671 4955 patch_prober.go:28] interesting pod/downloads-7954f5f757-8pl8k container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.606703 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-8pl8k" podUID="44a739c5-de17-458b-ab79-74c4bd74a43b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.626929 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.628480 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.128465681 +0000 UTC m=+144.717721251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.728948 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.731933 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.231918783 +0000 UTC m=+144.821174473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.836317 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.836827 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.336803746 +0000 UTC m=+144.926059316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:41 crc kubenswrapper[4955]: I1128 06:23:41.937838 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:41 crc kubenswrapper[4955]: E1128 06:23:41.938202 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.438188761 +0000 UTC m=+145.027444331 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.039405 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.039532 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.539497024 +0000 UTC m=+145.128752594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.039908 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.040224 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.540216654 +0000 UTC m=+145.129472224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.140788 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.141040 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.641012952 +0000 UTC m=+145.230268532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.141097 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.141463 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.641452024 +0000 UTC m=+145.230707584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.155925 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.156214 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.190245 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.190390 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.214089 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.244969 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.245252 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.745238516 +0000 UTC m=+145.334494086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.263920 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sxskz" podStartSLOduration=125.263905488 podStartE2EDuration="2m5.263905488s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:41.654288802 +0000 UTC m=+144.243544372" watchObservedRunningTime="2025-11-28 06:23:42.263905488 +0000 UTC m=+144.853161058" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.348308 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.348832 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-11-28 06:23:42.848814153 +0000 UTC m=+145.438069723 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.449435 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.449603 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.94956759 +0000 UTC m=+145.538823160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.449738 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.449999 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:42.949987152 +0000 UTC m=+145.539242712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.492711 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:42 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:42 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:42 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.492975 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.551733 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.551878 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:23:43.05185813 +0000 UTC m=+145.641113700 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.551987 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.552267 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.052259792 +0000 UTC m=+145.641515432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.616698 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" event={"ID":"e4e8a345-0a04-4ad6-b7c7-7805823c4026","Type":"ContainerStarted","Data":"ea6887a6a2184ff3abc64f62bb6f11a8419a1cb7c00a92e68ad8eaf50d4df2f9"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.630769 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" event={"ID":"ae90aa07-e0e4-47ea-8297-449220260a93","Type":"ContainerStarted","Data":"d58e07e7aa5880fe29bcdd12e01a062acf7c7b39a2505f151cf4358d495541a5"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.631104 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" event={"ID":"ae90aa07-e0e4-47ea-8297-449220260a93","Type":"ContainerStarted","Data":"07988497883c80b06670fd03250b466db4e9c38b51249ba2b4286d4261a5bad8"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.639096 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" event={"ID":"aef4e7b4-fffd-4637-9ce9-22264299ad8b","Type":"ContainerStarted","Data":"e9d2eca1b58d561e31f8484c9f4921ae0f0fe8a895829b5b21c5b819b999da51"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.639126 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" event={"ID":"aef4e7b4-fffd-4637-9ce9-22264299ad8b","Type":"ContainerStarted","Data":"d7e9a3196de6d0ddee7e40813bffa9dd22771b58a0ede9dfebbf63be15b1891e"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.639136 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" event={"ID":"aef4e7b4-fffd-4637-9ce9-22264299ad8b","Type":"ContainerStarted","Data":"1c2c8444cb6d3fdebabac69ac862a80a801cbabad766c3bbd83c43fa8a93fc87"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.643490 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-sslmp" podStartSLOduration=125.643479372 podStartE2EDuration="2m5.643479372s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:42.642780463 +0000 UTC m=+145.232036043" watchObservedRunningTime="2025-11-28 06:23:42.643479372 +0000 UTC m=+145.232734942" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.653175 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.653568 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.153553164 +0000 UTC m=+145.742808734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.656429 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" event={"ID":"15e25bb5-0316-4fe0-87de-de00d7c74741","Type":"ContainerStarted","Data":"94ab44a904526b05a277be1d913c651eb3112b25120bc3867a2c595baa9c023d"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.656552 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.658032 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" event={"ID":"bfde895e-d3ab-4d4c-a5ce-e309f52a7f52","Type":"ContainerStarted","Data":"3bc36e021c573644b460d4cd75476c312fbeb5d1b7f9994c5101fba9b3bff531"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.659050 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" event={"ID":"10eb260e-b06b-4c05-bd29-5cae90517573","Type":"ContainerStarted","Data":"1eb2fa72902dbf1082d1d7aa91c251bbb400204fc3536a525caefb5a8f0053a7"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.659075 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" 
event={"ID":"10eb260e-b06b-4c05-bd29-5cae90517573","Type":"ContainerStarted","Data":"0ac79d53d57f06a96c7948c2825ad34bd895b0fb4b66711d8b9064999c6f99c6"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.659835 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.671322 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" event={"ID":"7a8afa31-175a-4149-aa75-6b68fba36433","Type":"ContainerStarted","Data":"14cbcba951af47fc5ee15262091b7b8fb0b86974127b1d783e4add741cfa3ef2"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.671379 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" event={"ID":"7a8afa31-175a-4149-aa75-6b68fba36433","Type":"ContainerStarted","Data":"142e096b200540b3d1bdef2a964ca5982fcd444f9360a44212962bc3cfb133e0"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.681094 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" event={"ID":"ae62707c-beac-4d1f-96e0-fb7d9094a58f","Type":"ContainerStarted","Data":"2bb8dc4cbac879bd28101eaf76f715711403cf08e0aebb7904b291dd744eb557"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.736367 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pj5dg" podStartSLOduration=125.736352759 podStartE2EDuration="2m5.736352759s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:42.689426287 +0000 UTC m=+145.278681857" watchObservedRunningTime="2025-11-28 06:23:42.736352759 +0000 
UTC m=+145.325608319" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.736532 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" podStartSLOduration=126.736528724 podStartE2EDuration="2m6.736528724s" podCreationTimestamp="2025-11-28 06:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:42.735147196 +0000 UTC m=+145.324402766" watchObservedRunningTime="2025-11-28 06:23:42.736528724 +0000 UTC m=+145.325784294" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.746749 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" event={"ID":"0791ddeb-3c7c-47c8-8fd5-3ee0fe96b3c1","Type":"ContainerStarted","Data":"9bf823a98d1d12fcba173877a0caa0020b469be01b972086b3bf590b725cc4e7"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.767611 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.769460 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.269446655 +0000 UTC m=+145.858702225 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.788706 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-2jjx8" podStartSLOduration=125.788691563 podStartE2EDuration="2m5.788691563s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:42.786861172 +0000 UTC m=+145.376116752" watchObservedRunningTime="2025-11-28 06:23:42.788691563 +0000 UTC m=+145.377947133" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.799735 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" event={"ID":"41149789-c52d-44af-9616-92969e6d37c2","Type":"ContainerStarted","Data":"98f0b5626f62cd5ca16c334d3a8e57273d5e1324bae4b76862183cb2309b851e"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.815736 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" event={"ID":"df2fb484-2a0d-4283-bdb2-4b7915541845","Type":"ContainerStarted","Data":"787a524933539387cb2745b058cef9f0cdf45a72c5a90f757ff15fa1f10359d4"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.829382 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" 
event={"ID":"0e0af4cf-4fb0-4244-a3f1-e31d3028710a","Type":"ContainerStarted","Data":"0b7e3aa6f374375c8f624c6abfa7d746884a1c552895cf463c576829b3df8c4c"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.830486 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.830859 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" podStartSLOduration=125.830845451 podStartE2EDuration="2m5.830845451s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:42.828976569 +0000 UTC m=+145.418232149" watchObservedRunningTime="2025-11-28 06:23:42.830845451 +0000 UTC m=+145.420101011" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.850410 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kmfw2" event={"ID":"ce7cab06-019f-4d6b-82be-e9cacdbddb06","Type":"ContainerStarted","Data":"9fd084c81367f87a6d4d12b2214559cb0f06fd7b0867e6c6f2817efbd46024fa"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.859520 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.862564 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" event={"ID":"a148b60f-cd30-40a2-938d-133d328901a3","Type":"ContainerStarted","Data":"218bdf1c0ccce2b49b4885e0e0ba3ea013134281f6b02e68ffefaf386317846e"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.862608 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" event={"ID":"a148b60f-cd30-40a2-938d-133d328901a3","Type":"ContainerStarted","Data":"510a1fb094e75a77bd33e3c44ea0623426f0a27197171b920aff69ad1f0e476c"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.871302 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.871513 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.371472207 +0000 UTC m=+145.960727777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.871840 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.872194 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.372184187 +0000 UTC m=+145.961439757 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.873452 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" event={"ID":"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8","Type":"ContainerStarted","Data":"6d1679a0f4f290140b59aab80245a6658f25fa02d8d1cf13e17ee96baad9fefc"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.873484 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" event={"ID":"e5bfa87c-bbdf-45f6-8161-c4d4d6c7d9d8","Type":"ContainerStarted","Data":"c4ce2cc845098c5ec7f467b7ca4c7c77148c609543157baa120a7b633bf52940"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.891370 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mg445" event={"ID":"5b51a0dc-e121-4ba8-b0be-b01cf8553bfb","Type":"ContainerStarted","Data":"2710db2e5370ceebc16a4877e258e3880fbdbb65b0e909d737a658bd17b518a1"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.893350 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" event={"ID":"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6","Type":"ContainerStarted","Data":"9010462ead1dbaeeaed59ce5e623b261b220200f40eb37aa54f85dd28b2ad2a0"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.893372 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" 
event={"ID":"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6","Type":"ContainerStarted","Data":"3d5bd308295fbb5e0353961c4014c4b333cf5443a2e65f0e4f60451bfe2bb1f5"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.894003 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.895629 4955 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bf8xd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.895660 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" podUID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.897746 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-vlq67" event={"ID":"2b1c1c72-e11d-4a6d-8ca7-9a71c61e68d8","Type":"ContainerStarted","Data":"21501b12805595a461662581d240a901446cb1d873606f34582476e1fad7f044"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.898185 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.912687 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" event={"ID":"fa934dd3-d47c-454c-80fb-f40124c61e2d","Type":"ContainerStarted","Data":"8f88f961bf7b7cb99dc38233b375c2273080e7a9a7bbb6332dafe69c16fb849e"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.912744 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" event={"ID":"fa934dd3-d47c-454c-80fb-f40124c61e2d","Type":"ContainerStarted","Data":"4e04aa4100ccb955a6fb577bd913b6549a6588a4802d3a182ed2bd111e294df2"} Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.922648 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tq8t2" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.932524 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghfl6" podStartSLOduration=125.932492774 podStartE2EDuration="2m5.932492774s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:42.929895531 +0000 UTC m=+145.519151111" watchObservedRunningTime="2025-11-28 06:23:42.932492774 +0000 UTC m=+145.521748344" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.947682 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh4bx" Nov 28 06:23:42 crc kubenswrapper[4955]: I1128 06:23:42.975939 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:42 crc kubenswrapper[4955]: E1128 06:23:42.976952 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:23:43.476932887 +0000 UTC m=+146.066188457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.011441 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7ff5w" podStartSLOduration=126.011424481 podStartE2EDuration="2m6.011424481s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.008468578 +0000 UTC m=+145.597724148" watchObservedRunningTime="2025-11-28 06:23:43.011424481 +0000 UTC m=+145.600680051" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.022289 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" podStartSLOduration=126.022266084 podStartE2EDuration="2m6.022266084s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:42.968605544 +0000 UTC m=+145.557861134" watchObservedRunningTime="2025-11-28 06:23:43.022266084 +0000 UTC m=+145.611521654" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.073590 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-zdfh5" podStartSLOduration=126.073569739 podStartE2EDuration="2m6.073569739s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.041853622 +0000 UTC m=+145.631109202" watchObservedRunningTime="2025-11-28 06:23:43.073569739 +0000 UTC m=+145.662825309" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.081572 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.089609 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.589592417 +0000 UTC m=+146.178848087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.105658 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hg4kn" podStartSLOduration=126.105638835 podStartE2EDuration="2m6.105638835s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.074192916 +0000 UTC m=+145.663448496" watchObservedRunningTime="2025-11-28 06:23:43.105638835 +0000 UTC m=+145.694894405" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.142293 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-vlq67" podStartSLOduration=8.14227478 podStartE2EDuration="8.14227478s" podCreationTimestamp="2025-11-28 06:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.11223978 +0000 UTC m=+145.701495360" watchObservedRunningTime="2025-11-28 06:23:43.14227478 +0000 UTC m=+145.731530350" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.189714 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qj5n6" podStartSLOduration=126.189699036 podStartE2EDuration="2m6.189699036s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.156061195 +0000 UTC m=+145.745316765" watchObservedRunningTime="2025-11-28 06:23:43.189699036 +0000 UTC m=+145.778954606" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.190048 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-fjld9" podStartSLOduration=126.190044806 podStartE2EDuration="2m6.190044806s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.189288584 +0000 UTC m=+145.778544164" watchObservedRunningTime="2025-11-28 06:23:43.190044806 +0000 UTC m=+145.779300376" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.193050 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.193422 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.69340687 +0000 UTC m=+146.282662430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.295193 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.295495 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.795483584 +0000 UTC m=+146.384739154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.351886 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-j4mts" podStartSLOduration=126.351871731 podStartE2EDuration="2m6.351871731s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.350930394 +0000 UTC m=+145.940185974" watchObservedRunningTime="2025-11-28 06:23:43.351871731 +0000 UTC m=+145.941127301" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.396291 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.396646 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.896631122 +0000 UTC m=+146.485886692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.438393 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" podStartSLOduration=126.438375959 podStartE2EDuration="2m6.438375959s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.387104856 +0000 UTC m=+145.976360436" watchObservedRunningTime="2025-11-28 06:23:43.438375959 +0000 UTC m=+146.027631529" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.439160 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-466vn" podStartSLOduration=126.439156641 podStartE2EDuration="2m6.439156641s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.436863357 +0000 UTC m=+146.026118927" watchObservedRunningTime="2025-11-28 06:23:43.439156641 +0000 UTC m=+146.028412211" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.487690 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:43 crc 
kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:43 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:43 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.487734 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.498685 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.499018 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:43.999007675 +0000 UTC m=+146.588263245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.599769 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.599987 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.099956998 +0000 UTC m=+146.689212568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.600259 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.600593 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.100583485 +0000 UTC m=+146.689839055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.638440 4955 patch_prober.go:28] interesting pod/apiserver-76f77b778f-mg445 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]log ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]etcd ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/generic-apiserver-start-informers ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/max-in-flight-filter ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 28 06:23:43 crc kubenswrapper[4955]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 28 06:23:43 crc kubenswrapper[4955]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/project.openshift.io-projectcache ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/openshift.io-startinformers ok Nov 28 06:23:43 crc kubenswrapper[4955]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 28 06:23:43 crc 
kubenswrapper[4955]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 28 06:23:43 crc kubenswrapper[4955]: livez check failed Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.638495 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-mg445" podUID="5b51a0dc-e121-4ba8-b0be-b01cf8553bfb" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.659841 4955 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ctrqm container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.659922 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" podUID="10eb260e-b06b-4c05-bd29-5cae90517573" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.700765 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.700946 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-28 06:23:44.200921071 +0000 UTC m=+146.790176641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.701034 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.701473 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.201466116 +0000 UTC m=+146.790721676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.802197 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.802402 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.302377288 +0000 UTC m=+146.891632858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.802718 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.803129 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.303110918 +0000 UTC m=+146.892366488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.851331 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-svtkn"] Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.852672 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.855944 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.871579 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-svtkn"] Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.877774 4955 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.904186 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.904389 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.404360439 +0000 UTC m=+146.993616009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.904445 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:43 crc kubenswrapper[4955]: E1128 06:23:43.904756 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.40474873 +0000 UTC m=+146.994004300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.929416 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" event={"ID":"7a8afa31-175a-4149-aa75-6b68fba36433","Type":"ContainerStarted","Data":"f836af03632600812cc3fc89efe6c75fb8a86f4e048111f46614f5ef91182c17"} Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.934156 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" event={"ID":"42eecb9c-abf9-4306-92ee-6cbb96e76068","Type":"ContainerStarted","Data":"5c66ee5a2cb934b3af3e172ccb7713b938fdafffddfec89f77d5d7eff21fff2c"} Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.934202 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" event={"ID":"42eecb9c-abf9-4306-92ee-6cbb96e76068","Type":"ContainerStarted","Data":"d4d82e020d840a3b7dfbe33ebabf282ed48139de106fb6c8aeba8a1fbdb2ec62"} Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.935316 4955 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bf8xd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.935365 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" 
podUID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Nov 28 06:23:43 crc kubenswrapper[4955]: I1128 06:23:43.963172 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-m27qh" podStartSLOduration=126.963153533 podStartE2EDuration="2m6.963153533s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:43.961664222 +0000 UTC m=+146.550919802" watchObservedRunningTime="2025-11-28 06:23:43.963153533 +0000 UTC m=+146.552409103" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.005389 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.005775 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5csd\" (UniqueName: \"kubernetes.io/projected/bd3aeed8-258b-459f-bb90-be61ddf70b91-kube-api-access-v5csd\") pod \"community-operators-svtkn\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.005822 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-utilities\") pod \"community-operators-svtkn\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " 
pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.006216 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-catalog-content\") pod \"community-operators-svtkn\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.009223 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.509201191 +0000 UTC m=+147.098456761 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.014234 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ctrqm" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.050987 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hrjcd"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.051853 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.062967 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.074089 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hrjcd"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.108193 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5csd\" (UniqueName: \"kubernetes.io/projected/bd3aeed8-258b-459f-bb90-be61ddf70b91-kube-api-access-v5csd\") pod \"community-operators-svtkn\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.108235 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-utilities\") pod \"community-operators-svtkn\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.108308 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-catalog-content\") pod \"community-operators-svtkn\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.108347 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: 
\"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.108629 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.608612101 +0000 UTC m=+147.197867671 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.109425 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-utilities\") pod \"community-operators-svtkn\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.109655 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-catalog-content\") pod \"community-operators-svtkn\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.152721 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5csd\" (UniqueName: \"kubernetes.io/projected/bd3aeed8-258b-459f-bb90-be61ddf70b91-kube-api-access-v5csd\") pod \"community-operators-svtkn\" (UID: 
\"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.168820 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.209749 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.209922 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-utilities\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.209954 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bz2l\" (UniqueName: \"kubernetes.io/projected/f1a74a4b-b614-48f9-bc76-26f457ae5acd-kube-api-access-8bz2l\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.209993 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-catalog-content\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.210138 
4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.710125019 +0000 UTC m=+147.299380579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.254818 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpjkz"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.255693 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.273174 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpjkz"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.310973 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bz2l\" (UniqueName: \"kubernetes.io/projected/f1a74a4b-b614-48f9-bc76-26f457ae5acd-kube-api-access-8bz2l\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.311029 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-catalog-content\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.311088 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.311181 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-utilities\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.311636 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-utilities\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.312138 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-catalog-content\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.312365 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.812355598 +0000 UTC m=+147.401611158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.340720 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bz2l\" (UniqueName: \"kubernetes.io/projected/f1a74a4b-b614-48f9-bc76-26f457ae5acd-kube-api-access-8bz2l\") pod \"certified-operators-hrjcd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.381702 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.415446 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.415732 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-utilities\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.415824 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zw448\" (UniqueName: \"kubernetes.io/projected/78355f71-961d-418e-a9d8-5332eb5c0ab1-kube-api-access-zw448\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.415853 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-catalog-content\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.415971 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:44.915954935 +0000 UTC m=+147.505210505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.450584 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n686x"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.451447 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.461846 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n686x"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.472679 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:44 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:44 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:44 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.472748 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.517372 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.517412 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw448\" (UniqueName: \"kubernetes.io/projected/78355f71-961d-418e-a9d8-5332eb5c0ab1-kube-api-access-zw448\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 
06:23:44.517442 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-catalog-content\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.517472 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-catalog-content\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.517550 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-utilities\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.517587 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-utilities\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.517614 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhlcq\" (UniqueName: \"kubernetes.io/projected/cc2de0d3-1a18-4cdc-9377-17bac629998c-kube-api-access-qhlcq\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 
crc kubenswrapper[4955]: E1128 06:23:44.517859 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:45.017849294 +0000 UTC m=+147.607104864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.518380 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-catalog-content\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.518610 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-utilities\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.540497 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw448\" (UniqueName: \"kubernetes.io/projected/78355f71-961d-418e-a9d8-5332eb5c0ab1-kube-api-access-zw448\") pod \"community-operators-vpjkz\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc 
kubenswrapper[4955]: I1128 06:23:44.582314 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.618165 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.618418 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-catalog-content\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.618576 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-utilities\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.618615 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhlcq\" (UniqueName: \"kubernetes.io/projected/cc2de0d3-1a18-4cdc-9377-17bac629998c-kube-api-access-qhlcq\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.619013 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:45.118995962 +0000 UTC m=+147.708251532 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.619969 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-catalog-content\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.620026 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-utilities\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.638037 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-svtkn"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.647790 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhlcq\" (UniqueName: \"kubernetes.io/projected/cc2de0d3-1a18-4cdc-9377-17bac629998c-kube-api-access-qhlcq\") pod \"certified-operators-n686x\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc 
kubenswrapper[4955]: I1128 06:23:44.719318 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.719369 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.719398 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.719422 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.719588 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.719899 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:45.219884673 +0000 UTC m=+147.809140243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.723472 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.723867 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:44 crc kubenswrapper[4955]: 
I1128 06:23:44.727978 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.753467 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hrjcd"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.793311 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.824576 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.824880 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 06:23:45.324852159 +0000 UTC m=+147.914107729 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.825166 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:44 crc kubenswrapper[4955]: E1128 06:23:44.830938 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 06:23:45.330917478 +0000 UTC m=+147.920173048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4lj8g" (UID: "89f41960-5178-4dcf-adaa-823b323397d5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.861634 4955 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T06:23:43.877798667Z","Handler":null,"Name":""} Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.865199 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpjkz"] Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.874177 4955 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.874212 4955 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 06:23:44 crc kubenswrapper[4955]: W1128 06:23:44.891393 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78355f71_961d_418e_a9d8_5332eb5c0ab1.slice/crio-9c410f987bad993e9a62983af28ae2bb5570b778ca1c5c5f0ddd34fce7ad0d9a WatchSource:0}: Error finding container 9c410f987bad993e9a62983af28ae2bb5570b778ca1c5c5f0ddd34fce7ad0d9a: Status 404 returned error can't find the container with id 9c410f987bad993e9a62983af28ae2bb5570b778ca1c5c5f0ddd34fce7ad0d9a Nov 28 06:23:44 crc kubenswrapper[4955]: 
I1128 06:23:44.917907 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.927000 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.939190 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.944906 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.948987 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.958712 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.961319 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrjcd" event={"ID":"f1a74a4b-b614-48f9-bc76-26f457ae5acd","Type":"ContainerStarted","Data":"ca437bc92177b48f72412aa5d707a90e5aa1cfe192b95b0da3b87843ba24dbd2"} Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.962097 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpjkz" event={"ID":"78355f71-961d-418e-a9d8-5332eb5c0ab1","Type":"ContainerStarted","Data":"9c410f987bad993e9a62983af28ae2bb5570b778ca1c5c5f0ddd34fce7ad0d9a"} Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.967416 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" event={"ID":"42eecb9c-abf9-4306-92ee-6cbb96e76068","Type":"ContainerStarted","Data":"8f3c6833d831a27f0e236606518d893067300e52acafaf0eff440d628feaf27e"} Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.970312 4955 generic.go:334] "Generic (PLEG): container finished" podID="ae90aa07-e0e4-47ea-8297-449220260a93" containerID="d58e07e7aa5880fe29bcdd12e01a062acf7c7b39a2505f151cf4358d495541a5" exitCode=0 Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.970363 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" event={"ID":"ae90aa07-e0e4-47ea-8297-449220260a93","Type":"ContainerDied","Data":"d58e07e7aa5880fe29bcdd12e01a062acf7c7b39a2505f151cf4358d495541a5"} Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.990076 4955 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bf8xd container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection 
refused" start-of-body= Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.990116 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" podUID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Nov 28 06:23:44 crc kubenswrapper[4955]: I1128 06:23:44.990716 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svtkn" event={"ID":"bd3aeed8-258b-459f-bb90-be61ddf70b91","Type":"ContainerStarted","Data":"aef422fdc8a18432642626163142080756b283c16237ec381679b174a8ffc1e9"} Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.028535 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.062862 4955 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.062898 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.074156 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n686x"] Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.112898 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4lj8g\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.250765 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:45 crc kubenswrapper[4955]: W1128 06:23:45.356666 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-dca61b4abcfe9a56dbf995eaf94fb85bf0b2d92d745934fd1e3fa54ccabb5ff9 WatchSource:0}: Error finding container dca61b4abcfe9a56dbf995eaf94fb85bf0b2d92d745934fd1e3fa54ccabb5ff9: Status 404 returned error can't find the container with id dca61b4abcfe9a56dbf995eaf94fb85bf0b2d92d745934fd1e3fa54ccabb5ff9 Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.459283 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4lj8g"] Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.470334 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:45 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:45 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:45 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.470401 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:45 crc kubenswrapper[4955]: W1128 06:23:45.528604 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d433298f3f1087703758f7cfbd097bf4fa235ed551e26555afca2c60fcecd615 WatchSource:0}: Error finding container 
d433298f3f1087703758f7cfbd097bf4fa235ed551e26555afca2c60fcecd615: Status 404 returned error can't find the container with id d433298f3f1087703758f7cfbd097bf4fa235ed551e26555afca2c60fcecd615 Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.711030 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.840578 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mlp4t"] Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.841926 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.844430 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.849186 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlp4t"] Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.941523 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-catalog-content\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.941565 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-utilities\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:45 crc 
kubenswrapper[4955]: I1128 06:23:45.941617 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6f4\" (UniqueName: \"kubernetes.io/projected/29481cf1-0690-4067-b85d-b753b59d584d-kube-api-access-bz6f4\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.997395 4955 generic.go:334] "Generic (PLEG): container finished" podID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerID="0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775" exitCode=0 Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.997481 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrjcd" event={"ID":"f1a74a4b-b614-48f9-bc76-26f457ae5acd","Type":"ContainerDied","Data":"0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775"} Nov 28 06:23:45 crc kubenswrapper[4955]: I1128 06:23:45.999154 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.000433 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bf83bd956325053fb2b8ed1478acba69ec338bd7aaed5a8da574b8a532699973"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.000466 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2edf239492e135d5b8bf5c72aeb8346201d0676e602435179e7c5e5c10cc097b"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.002376 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerID="f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706" exitCode=0 Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.002499 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n686x" event={"ID":"cc2de0d3-1a18-4cdc-9377-17bac629998c","Type":"ContainerDied","Data":"f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.002600 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n686x" event={"ID":"cc2de0d3-1a18-4cdc-9377-17bac629998c","Type":"ContainerStarted","Data":"47960bdcc8fb96cfbd7661d1db4f0bc697f6867af5470cdb2f92e1f7de601dff"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.008211 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" event={"ID":"42eecb9c-abf9-4306-92ee-6cbb96e76068","Type":"ContainerStarted","Data":"b51e9619f75b2ae2fa6d82cc58ebfc5161ea9499996f5d7f878b0bce63788388"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.009969 4955 generic.go:334] "Generic (PLEG): container finished" podID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerID="79b6ce7cd055af5ec7f80e5d4a86501dd25154519891304e65caedd9c956bf10" exitCode=0 Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.010064 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svtkn" event={"ID":"bd3aeed8-258b-459f-bb90-be61ddf70b91","Type":"ContainerDied","Data":"79b6ce7cd055af5ec7f80e5d4a86501dd25154519891304e65caedd9c956bf10"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.013629 4955 generic.go:334] "Generic (PLEG): container finished" podID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerID="6574f1f17c3ac5814dfcb3cc443fd99ab34980597879a2ad915d413270b51fa8" exitCode=0 Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.013717 
4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpjkz" event={"ID":"78355f71-961d-418e-a9d8-5332eb5c0ab1","Type":"ContainerDied","Data":"6574f1f17c3ac5814dfcb3cc443fd99ab34980597879a2ad915d413270b51fa8"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.018951 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" event={"ID":"89f41960-5178-4dcf-adaa-823b323397d5","Type":"ContainerStarted","Data":"15f5a26a93ff663bf0aa49b36bc8c13a37403a476538890d35e2cdc00bbd3870"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.018998 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" event={"ID":"89f41960-5178-4dcf-adaa-823b323397d5","Type":"ContainerStarted","Data":"c705b0303688e8087461f2283996e9623c639697ae5fc8df0a80d214e620ca36"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.019207 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.022664 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9ddac45285038589cf3b3ef4e74472d373607b4eee77777045c2db3f924baee7"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.022717 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dca61b4abcfe9a56dbf995eaf94fb85bf0b2d92d745934fd1e3fa54ccabb5ff9"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.022932 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:23:46 crc 
kubenswrapper[4955]: I1128 06:23:46.027263 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8eb246e0ce96581b567a53bae4fc6614504cb7babc21d7cd1b3fe61ab6f69649"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.027333 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d433298f3f1087703758f7cfbd097bf4fa235ed551e26555afca2c60fcecd615"} Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.041615 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-bd6qn" podStartSLOduration=11.04160038 podStartE2EDuration="11.04160038s" podCreationTimestamp="2025-11-28 06:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:46.038969437 +0000 UTC m=+148.628225007" watchObservedRunningTime="2025-11-28 06:23:46.04160038 +0000 UTC m=+148.630855950" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.043842 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-catalog-content\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.043888 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-utilities\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " 
pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.043964 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6f4\" (UniqueName: \"kubernetes.io/projected/29481cf1-0690-4067-b85d-b753b59d584d-kube-api-access-bz6f4\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.044446 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-utilities\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.044938 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-catalog-content\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.085320 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz6f4\" (UniqueName: \"kubernetes.io/projected/29481cf1-0690-4067-b85d-b753b59d584d-kube-api-access-bz6f4\") pod \"redhat-marketplace-mlp4t\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.159078 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.245962 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" podStartSLOduration=129.245944744 podStartE2EDuration="2m9.245944744s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:46.210823332 +0000 UTC m=+148.800078922" watchObservedRunningTime="2025-11-28 06:23:46.245944744 +0000 UTC m=+148.835200314" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.246949 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4rnck"] Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.247896 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.261438 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rnck"] Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.353316 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s9qf\" (UniqueName: \"kubernetes.io/projected/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-kube-api-access-7s9qf\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.353371 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-catalog-content\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " 
pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.353428 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-utilities\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.392107 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.403813 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlp4t"] Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.454350 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwcw9\" (UniqueName: \"kubernetes.io/projected/ae90aa07-e0e4-47ea-8297-449220260a93-kube-api-access-kwcw9\") pod \"ae90aa07-e0e4-47ea-8297-449220260a93\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.454417 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae90aa07-e0e4-47ea-8297-449220260a93-config-volume\") pod \"ae90aa07-e0e4-47ea-8297-449220260a93\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.454484 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae90aa07-e0e4-47ea-8297-449220260a93-secret-volume\") pod \"ae90aa07-e0e4-47ea-8297-449220260a93\" (UID: \"ae90aa07-e0e4-47ea-8297-449220260a93\") " Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.454650 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s9qf\" (UniqueName: \"kubernetes.io/projected/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-kube-api-access-7s9qf\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.454697 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-catalog-content\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.454765 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-utilities\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.455186 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-utilities\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.455806 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-catalog-content\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.456112 4955 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/ae90aa07-e0e4-47ea-8297-449220260a93-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae90aa07-e0e4-47ea-8297-449220260a93" (UID: "ae90aa07-e0e4-47ea-8297-449220260a93"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.462204 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae90aa07-e0e4-47ea-8297-449220260a93-kube-api-access-kwcw9" (OuterVolumeSpecName: "kube-api-access-kwcw9") pod "ae90aa07-e0e4-47ea-8297-449220260a93" (UID: "ae90aa07-e0e4-47ea-8297-449220260a93"). InnerVolumeSpecName "kube-api-access-kwcw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.463173 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae90aa07-e0e4-47ea-8297-449220260a93-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae90aa07-e0e4-47ea-8297-449220260a93" (UID: "ae90aa07-e0e4-47ea-8297-449220260a93"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.479636 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s9qf\" (UniqueName: \"kubernetes.io/projected/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-kube-api-access-7s9qf\") pod \"redhat-marketplace-4rnck\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.482941 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:46 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:46 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:46 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.483012 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.556384 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae90aa07-e0e4-47ea-8297-449220260a93-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.556428 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwcw9\" (UniqueName: \"kubernetes.io/projected/ae90aa07-e0e4-47ea-8297-449220260a93-kube-api-access-kwcw9\") on node \"crc\" DevicePath \"\"" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.556440 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ae90aa07-e0e4-47ea-8297-449220260a93-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.585744 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:23:46 crc kubenswrapper[4955]: I1128 06:23:46.981710 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rnck"] Nov 28 06:23:47 crc kubenswrapper[4955]: W1128 06:23:47.002137 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd982a2fd_ea0e_45cb_8a06_d6f08855b5f6.slice/crio-1da65619e7ddfef8ae40e821e784a18695e8d6f9cefb760df5afd32e13b518ab WatchSource:0}: Error finding container 1da65619e7ddfef8ae40e821e784a18695e8d6f9cefb760df5afd32e13b518ab: Status 404 returned error can't find the container with id 1da65619e7ddfef8ae40e821e784a18695e8d6f9cefb760df5afd32e13b518ab Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.037963 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jhnw8"] Nov 28 06:23:47 crc kubenswrapper[4955]: E1128 06:23:47.038186 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae90aa07-e0e4-47ea-8297-449220260a93" containerName="collect-profiles" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.038207 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae90aa07-e0e4-47ea-8297-449220260a93" containerName="collect-profiles" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.038327 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae90aa07-e0e4-47ea-8297-449220260a93" containerName="collect-profiles" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.039230 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.041277 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.041834 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rnck" event={"ID":"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6","Type":"ContainerStarted","Data":"1da65619e7ddfef8ae40e821e784a18695e8d6f9cefb760df5afd32e13b518ab"} Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.043782 4955 generic.go:334] "Generic (PLEG): container finished" podID="29481cf1-0690-4067-b85d-b753b59d584d" containerID="711b223c15ecf8bcc915c1cd16b4a2e3eeda1dfe937f58c3eeadb8a2bc0cf8fa" exitCode=0 Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.043825 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlp4t" event={"ID":"29481cf1-0690-4067-b85d-b753b59d584d","Type":"ContainerDied","Data":"711b223c15ecf8bcc915c1cd16b4a2e3eeda1dfe937f58c3eeadb8a2bc0cf8fa"} Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.043842 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlp4t" event={"ID":"29481cf1-0690-4067-b85d-b753b59d584d","Type":"ContainerStarted","Data":"c8cd692b52c66961566c4f968cdd19e680b60749510038c1d61068487d661b2a"} Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.051679 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhnw8"] Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.055396 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.057706 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm" event={"ID":"ae90aa07-e0e4-47ea-8297-449220260a93","Type":"ContainerDied","Data":"07988497883c80b06670fd03250b466db4e9c38b51249ba2b4286d4261a5bad8"} Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.057737 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07988497883c80b06670fd03250b466db4e9c38b51249ba2b4286d4261a5bad8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.097051 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.097716 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.100438 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.102797 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.119496 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.156241 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.160340 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mg445" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 
06:23:47.165225 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-utilities\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.165338 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.165459 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.165634 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-catalog-content\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.166631 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn57d\" (UniqueName: \"kubernetes.io/projected/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-kube-api-access-jn57d\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " pod="openshift-marketplace/redhat-operators-jhnw8" 
Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.268412 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.268589 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-catalog-content\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.268650 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn57d\" (UniqueName: \"kubernetes.io/projected/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-kube-api-access-jn57d\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.268679 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-utilities\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.268701 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 
06:23:47.270014 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.288096 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-catalog-content\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.304543 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-utilities\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.310497 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-8pl8k" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.314142 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.316095 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn57d\" (UniqueName: \"kubernetes.io/projected/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-kube-api-access-jn57d\") pod \"redhat-operators-jhnw8\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " 
pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.360626 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.436983 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t4fkz"] Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.438335 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.465182 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4fkz"] Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.470567 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-catalog-content\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.470773 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-utilities\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.471490 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9zwf\" (UniqueName: \"kubernetes.io/projected/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-kube-api-access-x9zwf\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" 
Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.476732 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:47 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:47 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:47 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.476773 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.555083 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.582145 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-catalog-content\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.582205 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-utilities\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.582225 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9zwf\" (UniqueName: 
\"kubernetes.io/projected/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-kube-api-access-x9zwf\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.583196 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-catalog-content\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.583404 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-utilities\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.606855 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9zwf\" (UniqueName: \"kubernetes.io/projected/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-kube-api-access-x9zwf\") pod \"redhat-operators-t4fkz\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.784553 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.787257 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jhnw8"] Nov 28 06:23:47 crc kubenswrapper[4955]: W1128 06:23:47.857054 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4ff587c_9685_4dd4_9fb4_44f1f640b5c6.slice/crio-22265d6e235f4df70bdb51ce611512ab4190dc0c2168ad96c10d0d0b41001d09 WatchSource:0}: Error finding container 22265d6e235f4df70bdb51ce611512ab4190dc0c2168ad96c10d0d0b41001d09: Status 404 returned error can't find the container with id 22265d6e235f4df70bdb51ce611512ab4190dc0c2168ad96c10d0d0b41001d09 Nov 28 06:23:47 crc kubenswrapper[4955]: I1128 06:23:47.866213 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.071295 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4fkz"] Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.073078 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhnw8" event={"ID":"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6","Type":"ContainerStarted","Data":"22265d6e235f4df70bdb51ce611512ab4190dc0c2168ad96c10d0d0b41001d09"} Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.079210 4955 generic.go:334] "Generic (PLEG): container finished" podID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerID="a191cc736ebbc9adff9cac3ba3b12a851d7b06d020dad2141a5df8670f44777b" exitCode=0 Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.079371 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rnck" 
event={"ID":"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6","Type":"ContainerDied","Data":"a191cc736ebbc9adff9cac3ba3b12a851d7b06d020dad2141a5df8670f44777b"} Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.083210 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c3abef0e-1c23-43be-9f7e-3b724a8fd411","Type":"ContainerStarted","Data":"9ef7f98f69cc056a5865782f53c13871bc5ad116f790898e7b1de020cae3587a"} Nov 28 06:23:48 crc kubenswrapper[4955]: W1128 06:23:48.086837 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29a63cf6_5c16_4f9d_9da2_30a613c6b20a.slice/crio-a88457ba944b6ecf8fd2c6b95828bd257738e2c8a9e398892d77e3b0beb43ae6 WatchSource:0}: Error finding container a88457ba944b6ecf8fd2c6b95828bd257738e2c8a9e398892d77e3b0beb43ae6: Status 404 returned error can't find the container with id a88457ba944b6ecf8fd2c6b95828bd257738e2c8a9e398892d77e3b0beb43ae6 Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.336712 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.336757 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.337744 4955 patch_prober.go:28] interesting pod/console-f9d7485db-sxskz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.337777 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sxskz" podUID="71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" containerName="console" probeResult="failure" output="Get 
\"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.468944 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.471906 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:48 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:48 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:48 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.471939 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.477925 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.478563 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.484044 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.484232 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.486909 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.544568 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.627298 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92f52c2e-6356-40a2-8c94-09c8133bff4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92f52c2e-6356-40a2-8c94-09c8133bff4b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.627492 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92f52c2e-6356-40a2-8c94-09c8133bff4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92f52c2e-6356-40a2-8c94-09c8133bff4b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.729448 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92f52c2e-6356-40a2-8c94-09c8133bff4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"92f52c2e-6356-40a2-8c94-09c8133bff4b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.729574 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92f52c2e-6356-40a2-8c94-09c8133bff4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92f52c2e-6356-40a2-8c94-09c8133bff4b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.729586 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92f52c2e-6356-40a2-8c94-09c8133bff4b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"92f52c2e-6356-40a2-8c94-09c8133bff4b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.756178 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92f52c2e-6356-40a2-8c94-09c8133bff4b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"92f52c2e-6356-40a2-8c94-09c8133bff4b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:48 crc kubenswrapper[4955]: I1128 06:23:48.877714 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.090440 4955 generic.go:334] "Generic (PLEG): container finished" podID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerID="834494de76a4715b1db3e2f182be5a35c97eab420e6a58c9384ae091af116ef5" exitCode=0 Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.090634 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhnw8" event={"ID":"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6","Type":"ContainerDied","Data":"834494de76a4715b1db3e2f182be5a35c97eab420e6a58c9384ae091af116ef5"} Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.092947 4955 generic.go:334] "Generic (PLEG): container finished" podID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerID="aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57" exitCode=0 Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.092981 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4fkz" event={"ID":"29a63cf6-5c16-4f9d-9da2-30a613c6b20a","Type":"ContainerDied","Data":"aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57"} Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.092996 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4fkz" event={"ID":"29a63cf6-5c16-4f9d-9da2-30a613c6b20a","Type":"ContainerStarted","Data":"a88457ba944b6ecf8fd2c6b95828bd257738e2c8a9e398892d77e3b0beb43ae6"} Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.097454 4955 generic.go:334] "Generic (PLEG): container finished" podID="c3abef0e-1c23-43be-9f7e-3b724a8fd411" containerID="43bbfe183485fcc271b48387e6e92025eea6f7126ab83688f8f12678a867f1e9" exitCode=0 Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.097479 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"c3abef0e-1c23-43be-9f7e-3b724a8fd411","Type":"ContainerDied","Data":"43bbfe183485fcc271b48387e6e92025eea6f7126ab83688f8f12678a867f1e9"} Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.153752 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 06:23:49 crc kubenswrapper[4955]: W1128 06:23:49.186583 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod92f52c2e_6356_40a2_8c94_09c8133bff4b.slice/crio-20555f0860415cb47da62e27f76f63fefec7ef679ba698b65589a1bb92452bc6 WatchSource:0}: Error finding container 20555f0860415cb47da62e27f76f63fefec7ef679ba698b65589a1bb92452bc6: Status 404 returned error can't find the container with id 20555f0860415cb47da62e27f76f63fefec7ef679ba698b65589a1bb92452bc6 Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.470071 4955 patch_prober.go:28] interesting pod/router-default-5444994796-pw7x8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 06:23:49 crc kubenswrapper[4955]: [-]has-synced failed: reason withheld Nov 28 06:23:49 crc kubenswrapper[4955]: [+]process-running ok Nov 28 06:23:49 crc kubenswrapper[4955]: healthz check failed Nov 28 06:23:49 crc kubenswrapper[4955]: I1128 06:23:49.470150 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pw7x8" podUID="8e298e78-6a12-4148-aba0-25829ecf409c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.111631 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92f52c2e-6356-40a2-8c94-09c8133bff4b","Type":"ContainerStarted","Data":"789e531a1a874f79acbf544c21bccc4d0bb54b55b105c0975b553f6ebe0333c1"} Nov 
28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.111666 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92f52c2e-6356-40a2-8c94-09c8133bff4b","Type":"ContainerStarted","Data":"20555f0860415cb47da62e27f76f63fefec7ef679ba698b65589a1bb92452bc6"} Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.125063 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.125024251 podStartE2EDuration="2.125024251s" podCreationTimestamp="2025-11-28 06:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:23:50.123360995 +0000 UTC m=+152.712616565" watchObservedRunningTime="2025-11-28 06:23:50.125024251 +0000 UTC m=+152.714279821" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.238849 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-vlq67" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.410753 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.470843 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.477105 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pw7x8" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.554059 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kubelet-dir\") pod \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\" (UID: \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\") " Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.554177 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kube-api-access\") pod \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\" (UID: \"c3abef0e-1c23-43be-9f7e-3b724a8fd411\") " Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.555080 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c3abef0e-1c23-43be-9f7e-3b724a8fd411" (UID: "c3abef0e-1c23-43be-9f7e-3b724a8fd411"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.561668 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c3abef0e-1c23-43be-9f7e-3b724a8fd411" (UID: "c3abef0e-1c23-43be-9f7e-3b724a8fd411"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.655909 4955 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:23:50 crc kubenswrapper[4955]: I1128 06:23:50.655939 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3abef0e-1c23-43be-9f7e-3b724a8fd411-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:23:51 crc kubenswrapper[4955]: I1128 06:23:51.120698 4955 generic.go:334] "Generic (PLEG): container finished" podID="92f52c2e-6356-40a2-8c94-09c8133bff4b" containerID="789e531a1a874f79acbf544c21bccc4d0bb54b55b105c0975b553f6ebe0333c1" exitCode=0 Nov 28 06:23:51 crc kubenswrapper[4955]: I1128 06:23:51.120791 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92f52c2e-6356-40a2-8c94-09c8133bff4b","Type":"ContainerDied","Data":"789e531a1a874f79acbf544c21bccc4d0bb54b55b105c0975b553f6ebe0333c1"} Nov 28 06:23:51 crc kubenswrapper[4955]: I1128 06:23:51.125427 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 06:23:51 crc kubenswrapper[4955]: I1128 06:23:51.130124 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c3abef0e-1c23-43be-9f7e-3b724a8fd411","Type":"ContainerDied","Data":"9ef7f98f69cc056a5865782f53c13871bc5ad116f790898e7b1de020cae3587a"} Nov 28 06:23:51 crc kubenswrapper[4955]: I1128 06:23:51.131816 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef7f98f69cc056a5865782f53c13871bc5ad116f790898e7b1de020cae3587a" Nov 28 06:23:52 crc kubenswrapper[4955]: I1128 06:23:52.419909 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:52 crc kubenswrapper[4955]: I1128 06:23:52.482252 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92f52c2e-6356-40a2-8c94-09c8133bff4b-kubelet-dir\") pod \"92f52c2e-6356-40a2-8c94-09c8133bff4b\" (UID: \"92f52c2e-6356-40a2-8c94-09c8133bff4b\") " Nov 28 06:23:52 crc kubenswrapper[4955]: I1128 06:23:52.482316 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92f52c2e-6356-40a2-8c94-09c8133bff4b-kube-api-access\") pod \"92f52c2e-6356-40a2-8c94-09c8133bff4b\" (UID: \"92f52c2e-6356-40a2-8c94-09c8133bff4b\") " Nov 28 06:23:52 crc kubenswrapper[4955]: I1128 06:23:52.482382 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92f52c2e-6356-40a2-8c94-09c8133bff4b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "92f52c2e-6356-40a2-8c94-09c8133bff4b" (UID: "92f52c2e-6356-40a2-8c94-09c8133bff4b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:23:52 crc kubenswrapper[4955]: I1128 06:23:52.482552 4955 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92f52c2e-6356-40a2-8c94-09c8133bff4b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:23:52 crc kubenswrapper[4955]: I1128 06:23:52.498889 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f52c2e-6356-40a2-8c94-09c8133bff4b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "92f52c2e-6356-40a2-8c94-09c8133bff4b" (UID: "92f52c2e-6356-40a2-8c94-09c8133bff4b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:23:52 crc kubenswrapper[4955]: I1128 06:23:52.583240 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92f52c2e-6356-40a2-8c94-09c8133bff4b-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:23:53 crc kubenswrapper[4955]: I1128 06:23:53.138427 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"92f52c2e-6356-40a2-8c94-09c8133bff4b","Type":"ContainerDied","Data":"20555f0860415cb47da62e27f76f63fefec7ef679ba698b65589a1bb92452bc6"} Nov 28 06:23:53 crc kubenswrapper[4955]: I1128 06:23:53.138468 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20555f0860415cb47da62e27f76f63fefec7ef679ba698b65589a1bb92452bc6" Nov 28 06:23:53 crc kubenswrapper[4955]: I1128 06:23:53.138483 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 06:23:53 crc kubenswrapper[4955]: I1128 06:23:53.393148 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:23:53 crc kubenswrapper[4955]: I1128 06:23:53.393428 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:23:58 crc kubenswrapper[4955]: I1128 06:23:58.340684 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:58 crc kubenswrapper[4955]: I1128 06:23:58.344012 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:23:59 crc kubenswrapper[4955]: I1128 06:23:59.914577 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:59 crc kubenswrapper[4955]: I1128 06:23:59.923283 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/483773b2-23ab-4ebe-8111-f553a0c95523-metrics-certs\") pod \"network-metrics-daemon-mhptq\" (UID: \"483773b2-23ab-4ebe-8111-f553a0c95523\") " pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:23:59 crc 
kubenswrapper[4955]: I1128 06:23:59.923680 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mhptq" Nov 28 06:24:05 crc kubenswrapper[4955]: I1128 06:24:05.421090 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:24:16 crc kubenswrapper[4955]: E1128 06:24:16.415284 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 06:24:16 crc kubenswrapper[4955]: E1128 06:24:16.416128 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5csd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-svtkn_openshift-marketplace(bd3aeed8-258b-459f-bb90-be61ddf70b91): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:24:16 crc kubenswrapper[4955]: E1128 06:24:16.417292 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-svtkn" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" Nov 28 06:24:17 crc 
kubenswrapper[4955]: E1128 06:24:17.598639 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-svtkn" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.691880 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.692078 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8bz2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hrjcd_openshift-marketplace(f1a74a4b-b614-48f9-bc76-26f457ae5acd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.693301 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hrjcd" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" Nov 28 06:24:17 crc 
kubenswrapper[4955]: E1128 06:24:17.725695 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.726075 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zw448,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-vpjkz_openshift-marketplace(78355f71-961d-418e-a9d8-5332eb5c0ab1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.727136 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vpjkz" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.734400 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.734535 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jn57d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jhnw8_openshift-marketplace(a4ff587c-9685-4dd4-9fb4-44f1f640b5c6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.735791 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jhnw8" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" Nov 28 06:24:17 crc 
kubenswrapper[4955]: E1128 06:24:17.743254 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.743435 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7s9qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-4rnck_openshift-marketplace(d982a2fd-ea0e-45cb-8a06-d6f08855b5f6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 06:24:17 crc kubenswrapper[4955]: E1128 06:24:17.744541 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4rnck" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.031203 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mhptq"] Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.334862 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mhptq" event={"ID":"483773b2-23ab-4ebe-8111-f553a0c95523","Type":"ContainerStarted","Data":"1d5fa01aa5a6f20173785231c8f49a58e4727ac67abea44e55f52eaac20bf6c4"} Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.334920 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mhptq" event={"ID":"483773b2-23ab-4ebe-8111-f553a0c95523","Type":"ContainerStarted","Data":"5cec4c191276c4593cb573c1eaf1526ea9319c8eff218959bb3a194479eb4412"} Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.338123 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4fkz" event={"ID":"29a63cf6-5c16-4f9d-9da2-30a613c6b20a","Type":"ContainerStarted","Data":"8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64"} Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.341914 4955 generic.go:334] "Generic (PLEG): container finished" podID="cc2de0d3-1a18-4cdc-9377-17bac629998c" 
containerID="98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b" exitCode=0 Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.342001 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n686x" event={"ID":"cc2de0d3-1a18-4cdc-9377-17bac629998c","Type":"ContainerDied","Data":"98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b"} Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.358255 4955 generic.go:334] "Generic (PLEG): container finished" podID="29481cf1-0690-4067-b85d-b753b59d584d" containerID="64df6fff9348bf88e12210b17e569aa800fbda367e403c5a7101c1e2ead25cdf" exitCode=0 Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.358444 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlp4t" event={"ID":"29481cf1-0690-4067-b85d-b753b59d584d","Type":"ContainerDied","Data":"64df6fff9348bf88e12210b17e569aa800fbda367e403c5a7101c1e2ead25cdf"} Nov 28 06:24:18 crc kubenswrapper[4955]: E1128 06:24:18.361772 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hrjcd" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" Nov 28 06:24:18 crc kubenswrapper[4955]: E1128 06:24:18.364470 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4rnck" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" Nov 28 06:24:18 crc kubenswrapper[4955]: E1128 06:24:18.365073 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vpjkz" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" Nov 28 06:24:18 crc kubenswrapper[4955]: E1128 06:24:18.372225 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jhnw8" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" Nov 28 06:24:18 crc kubenswrapper[4955]: I1128 06:24:18.454022 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-r6gpz" Nov 28 06:24:19 crc kubenswrapper[4955]: I1128 06:24:19.365899 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mhptq" event={"ID":"483773b2-23ab-4ebe-8111-f553a0c95523","Type":"ContainerStarted","Data":"38a9018ffb1e912bfaf697aeb1ab057f83b007d35e097d655fb862a002792d2a"} Nov 28 06:24:19 crc kubenswrapper[4955]: I1128 06:24:19.369099 4955 generic.go:334] "Generic (PLEG): container finished" podID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerID="8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64" exitCode=0 Nov 28 06:24:19 crc kubenswrapper[4955]: I1128 06:24:19.369304 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4fkz" event={"ID":"29a63cf6-5c16-4f9d-9da2-30a613c6b20a","Type":"ContainerDied","Data":"8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64"} Nov 28 06:24:19 crc kubenswrapper[4955]: I1128 06:24:19.371997 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n686x" event={"ID":"cc2de0d3-1a18-4cdc-9377-17bac629998c","Type":"ContainerStarted","Data":"f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c"} Nov 
28 06:24:19 crc kubenswrapper[4955]: I1128 06:24:19.377882 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlp4t" event={"ID":"29481cf1-0690-4067-b85d-b753b59d584d","Type":"ContainerStarted","Data":"83487ab3468a2af9c2653ecedb977192561af661d9cc3a272749b5a910cb2f97"} Nov 28 06:24:19 crc kubenswrapper[4955]: I1128 06:24:19.383448 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mhptq" podStartSLOduration=162.383430571 podStartE2EDuration="2m42.383430571s" podCreationTimestamp="2025-11-28 06:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:24:19.3826837 +0000 UTC m=+181.971939280" watchObservedRunningTime="2025-11-28 06:24:19.383430571 +0000 UTC m=+181.972686161" Nov 28 06:24:19 crc kubenswrapper[4955]: I1128 06:24:19.423601 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n686x" podStartSLOduration=2.37560321 podStartE2EDuration="35.423581284s" podCreationTimestamp="2025-11-28 06:23:44 +0000 UTC" firstStartedPulling="2025-11-28 06:23:46.003479844 +0000 UTC m=+148.592735414" lastFinishedPulling="2025-11-28 06:24:19.051457878 +0000 UTC m=+181.640713488" observedRunningTime="2025-11-28 06:24:19.419696685 +0000 UTC m=+182.008952275" watchObservedRunningTime="2025-11-28 06:24:19.423581284 +0000 UTC m=+182.012836854" Nov 28 06:24:19 crc kubenswrapper[4955]: I1128 06:24:19.459328 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mlp4t" podStartSLOduration=2.386512207 podStartE2EDuration="34.459313363s" podCreationTimestamp="2025-11-28 06:23:45 +0000 UTC" firstStartedPulling="2025-11-28 06:23:47.050576554 +0000 UTC m=+149.639832124" lastFinishedPulling="2025-11-28 06:24:19.12337768 +0000 UTC m=+181.712633280" 
observedRunningTime="2025-11-28 06:24:19.456810153 +0000 UTC m=+182.046065723" watchObservedRunningTime="2025-11-28 06:24:19.459313363 +0000 UTC m=+182.048568933" Nov 28 06:24:20 crc kubenswrapper[4955]: I1128 06:24:20.385582 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4fkz" event={"ID":"29a63cf6-5c16-4f9d-9da2-30a613c6b20a","Type":"ContainerStarted","Data":"f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0"} Nov 28 06:24:20 crc kubenswrapper[4955]: I1128 06:24:20.400527 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t4fkz" podStartSLOduration=2.573563821 podStartE2EDuration="33.400497021s" podCreationTimestamp="2025-11-28 06:23:47 +0000 UTC" firstStartedPulling="2025-11-28 06:23:49.094077753 +0000 UTC m=+151.683333323" lastFinishedPulling="2025-11-28 06:24:19.921010943 +0000 UTC m=+182.510266523" observedRunningTime="2025-11-28 06:24:20.398605988 +0000 UTC m=+182.987861568" watchObservedRunningTime="2025-11-28 06:24:20.400497021 +0000 UTC m=+182.989752591" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.244248 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 06:24:22 crc kubenswrapper[4955]: E1128 06:24:22.245576 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3abef0e-1c23-43be-9f7e-3b724a8fd411" containerName="pruner" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.245627 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3abef0e-1c23-43be-9f7e-3b724a8fd411" containerName="pruner" Nov 28 06:24:22 crc kubenswrapper[4955]: E1128 06:24:22.245648 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f52c2e-6356-40a2-8c94-09c8133bff4b" containerName="pruner" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.245656 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="92f52c2e-6356-40a2-8c94-09c8133bff4b" containerName="pruner" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.246053 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f52c2e-6356-40a2-8c94-09c8133bff4b" containerName="pruner" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.246079 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3abef0e-1c23-43be-9f7e-3b724a8fd411" containerName="pruner" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.247542 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.250651 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.252810 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.265433 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.265612 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.265661 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.366363 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.366408 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.366498 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.384456 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.582266 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:22 crc kubenswrapper[4955]: I1128 06:24:22.796409 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 06:24:23 crc kubenswrapper[4955]: I1128 06:24:23.393393 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:24:23 crc kubenswrapper[4955]: I1128 06:24:23.393805 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:24:23 crc kubenswrapper[4955]: I1128 06:24:23.402376 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6","Type":"ContainerStarted","Data":"496328e82d4d93f6bf0182e5bf60099efd9598848b29745ac67e3cef354e2196"} Nov 28 06:24:23 crc kubenswrapper[4955]: I1128 06:24:23.402451 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6","Type":"ContainerStarted","Data":"fa5c08f44518adc64ade7da33ec499f282e0892ff96ee2165beedb2037a6d53c"} Nov 28 06:24:23 crc kubenswrapper[4955]: I1128 06:24:23.419934 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.419908079 podStartE2EDuration="1.419908079s" podCreationTimestamp="2025-11-28 06:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:24:23.415969419 +0000 UTC m=+186.005225019" watchObservedRunningTime="2025-11-28 06:24:23.419908079 +0000 UTC m=+186.009163669" Nov 28 06:24:24 crc kubenswrapper[4955]: I1128 06:24:24.409271 4955 generic.go:334] "Generic (PLEG): container finished" podID="a8eb697b-d1ba-4cb7-91a3-c7258b8818d6" containerID="496328e82d4d93f6bf0182e5bf60099efd9598848b29745ac67e3cef354e2196" exitCode=0 Nov 28 06:24:24 crc kubenswrapper[4955]: I1128 06:24:24.409311 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6","Type":"ContainerDied","Data":"496328e82d4d93f6bf0182e5bf60099efd9598848b29745ac67e3cef354e2196"} Nov 28 06:24:24 crc kubenswrapper[4955]: I1128 06:24:24.793968 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:24:24 crc kubenswrapper[4955]: I1128 06:24:24.794263 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:24:24 crc kubenswrapper[4955]: I1128 06:24:24.863702 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:24:24 crc kubenswrapper[4955]: I1128 06:24:24.953243 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.456327 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.629567 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.713075 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kube-api-access\") pod \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\" (UID: \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\") " Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.713133 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kubelet-dir\") pod \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\" (UID: \"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6\") " Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.713275 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a8eb697b-d1ba-4cb7-91a3-c7258b8818d6" (UID: "a8eb697b-d1ba-4cb7-91a3-c7258b8818d6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.713475 4955 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.721024 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a8eb697b-d1ba-4cb7-91a3-c7258b8818d6" (UID: "a8eb697b-d1ba-4cb7-91a3-c7258b8818d6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.816182 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8eb697b-d1ba-4cb7-91a3-c7258b8818d6-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:25 crc kubenswrapper[4955]: I1128 06:24:25.907974 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-z7ncs"] Nov 28 06:24:26 crc kubenswrapper[4955]: I1128 06:24:26.159873 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:24:26 crc kubenswrapper[4955]: I1128 06:24:26.160144 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:24:26 crc kubenswrapper[4955]: I1128 06:24:26.198994 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:24:26 crc kubenswrapper[4955]: I1128 06:24:26.419440 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a8eb697b-d1ba-4cb7-91a3-c7258b8818d6","Type":"ContainerDied","Data":"fa5c08f44518adc64ade7da33ec499f282e0892ff96ee2165beedb2037a6d53c"} Nov 28 06:24:26 crc kubenswrapper[4955]: I1128 06:24:26.419752 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa5c08f44518adc64ade7da33ec499f282e0892ff96ee2165beedb2037a6d53c" Nov 28 06:24:26 crc kubenswrapper[4955]: I1128 06:24:26.419651 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 06:24:26 crc kubenswrapper[4955]: I1128 06:24:26.463043 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:24:26 crc kubenswrapper[4955]: I1128 06:24:26.759928 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n686x"] Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.242149 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 06:24:27 crc kubenswrapper[4955]: E1128 06:24:27.244472 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb697b-d1ba-4cb7-91a3-c7258b8818d6" containerName="pruner" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.244518 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb697b-d1ba-4cb7-91a3-c7258b8818d6" containerName="pruner" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.244712 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8eb697b-d1ba-4cb7-91a3-c7258b8818d6" containerName="pruner" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.245134 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.248769 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.248961 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.260960 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.341967 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-kubelet-dir\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.342179 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-var-lock\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.342240 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76e628a5-2b40-40b9-a4d1-29ff689b1096-kube-api-access\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.422723 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n686x" 
podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerName="registry-server" containerID="cri-o://f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c" gracePeriod=2 Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.443449 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-kubelet-dir\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.443559 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-kubelet-dir\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.443671 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-var-lock\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.443635 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-var-lock\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.443718 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76e628a5-2b40-40b9-a4d1-29ff689b1096-kube-api-access\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " 
pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.785978 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.786051 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.813351 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76e628a5-2b40-40b9-a4d1-29ff689b1096-kube-api-access\") pod \"installer-9-crc\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.828716 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:24:27 crc kubenswrapper[4955]: I1128 06:24:27.880735 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.130412 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 06:24:28 crc kubenswrapper[4955]: W1128 06:24:28.160723 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod76e628a5_2b40_40b9_a4d1_29ff689b1096.slice/crio-30b231a12d92d5f5968e06012ee9f2515f5227a83bc0ca60b163d7da0e72fa05 WatchSource:0}: Error finding container 30b231a12d92d5f5968e06012ee9f2515f5227a83bc0ca60b163d7da0e72fa05: Status 404 returned error can't find the container with id 30b231a12d92d5f5968e06012ee9f2515f5227a83bc0ca60b163d7da0e72fa05 Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.252132 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.353703 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-catalog-content\") pod \"cc2de0d3-1a18-4cdc-9377-17bac629998c\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.353964 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhlcq\" (UniqueName: \"kubernetes.io/projected/cc2de0d3-1a18-4cdc-9377-17bac629998c-kube-api-access-qhlcq\") pod \"cc2de0d3-1a18-4cdc-9377-17bac629998c\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.354048 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-utilities\") pod \"cc2de0d3-1a18-4cdc-9377-17bac629998c\" (UID: \"cc2de0d3-1a18-4cdc-9377-17bac629998c\") " Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.354755 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-utilities" (OuterVolumeSpecName: "utilities") pod "cc2de0d3-1a18-4cdc-9377-17bac629998c" (UID: "cc2de0d3-1a18-4cdc-9377-17bac629998c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.363548 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc2de0d3-1a18-4cdc-9377-17bac629998c-kube-api-access-qhlcq" (OuterVolumeSpecName: "kube-api-access-qhlcq") pod "cc2de0d3-1a18-4cdc-9377-17bac629998c" (UID: "cc2de0d3-1a18-4cdc-9377-17bac629998c"). InnerVolumeSpecName "kube-api-access-qhlcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.425190 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc2de0d3-1a18-4cdc-9377-17bac629998c" (UID: "cc2de0d3-1a18-4cdc-9377-17bac629998c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.431707 4955 generic.go:334] "Generic (PLEG): container finished" podID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerID="f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c" exitCode=0 Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.431781 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n686x" event={"ID":"cc2de0d3-1a18-4cdc-9377-17bac629998c","Type":"ContainerDied","Data":"f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c"} Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.431813 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n686x" event={"ID":"cc2de0d3-1a18-4cdc-9377-17bac629998c","Type":"ContainerDied","Data":"47960bdcc8fb96cfbd7661d1db4f0bc697f6867af5470cdb2f92e1f7de601dff"} Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.431829 4955 scope.go:117] "RemoveContainer" containerID="f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.431948 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n686x" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.433705 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"76e628a5-2b40-40b9-a4d1-29ff689b1096","Type":"ContainerStarted","Data":"30b231a12d92d5f5968e06012ee9f2515f5227a83bc0ca60b163d7da0e72fa05"} Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.453980 4955 scope.go:117] "RemoveContainer" containerID="98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.455067 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.455086 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc2de0d3-1a18-4cdc-9377-17bac629998c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.455096 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhlcq\" (UniqueName: \"kubernetes.io/projected/cc2de0d3-1a18-4cdc-9377-17bac629998c-kube-api-access-qhlcq\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.462787 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n686x"] Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.465895 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n686x"] Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.477007 4955 scope.go:117] "RemoveContainer" containerID="f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.491520 4955 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.494810 4955 scope.go:117] "RemoveContainer" containerID="f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c" Nov 28 06:24:28 crc kubenswrapper[4955]: E1128 06:24:28.495253 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c\": container with ID starting with f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c not found: ID does not exist" containerID="f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.495292 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c"} err="failed to get container status \"f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c\": rpc error: code = NotFound desc = could not find container \"f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c\": container with ID starting with f51fbd996efce0ab772d42fd18c6c608528a995e2df231e1a4feaec0e90f5f1c not found: ID does not exist" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.495343 4955 scope.go:117] "RemoveContainer" containerID="98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b" Nov 28 06:24:28 crc kubenswrapper[4955]: E1128 06:24:28.496192 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b\": container with ID starting with 98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b not found: ID does not exist" containerID="98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b" Nov 28 06:24:28 
crc kubenswrapper[4955]: I1128 06:24:28.496235 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b"} err="failed to get container status \"98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b\": rpc error: code = NotFound desc = could not find container \"98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b\": container with ID starting with 98f084ebc3448b27110c5255195a6834568b2a3259f12aa40f566a9627a11d8b not found: ID does not exist" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.496264 4955 scope.go:117] "RemoveContainer" containerID="f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706" Nov 28 06:24:28 crc kubenswrapper[4955]: E1128 06:24:28.496549 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706\": container with ID starting with f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706 not found: ID does not exist" containerID="f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706" Nov 28 06:24:28 crc kubenswrapper[4955]: I1128 06:24:28.496578 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706"} err="failed to get container status \"f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706\": rpc error: code = NotFound desc = could not find container \"f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706\": container with ID starting with f9b5245f1859bbc531bcd08293326c1f44fcb5d2d714813f30df1e93e7311706 not found: ID does not exist" Nov 28 06:24:29 crc kubenswrapper[4955]: I1128 06:24:29.441584 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"76e628a5-2b40-40b9-a4d1-29ff689b1096","Type":"ContainerStarted","Data":"07c03cbbfa8d83ebbe79d6bbd2c520ec768548c4e4ad4b934d90a9df5367c9e2"} Nov 28 06:24:29 crc kubenswrapper[4955]: I1128 06:24:29.458073 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.458052108 podStartE2EDuration="2.458052108s" podCreationTimestamp="2025-11-28 06:24:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:24:29.453879681 +0000 UTC m=+192.043135261" watchObservedRunningTime="2025-11-28 06:24:29.458052108 +0000 UTC m=+192.047307668" Nov 28 06:24:29 crc kubenswrapper[4955]: I1128 06:24:29.713543 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" path="/var/lib/kubelet/pods/cc2de0d3-1a18-4cdc-9377-17bac629998c/volumes" Nov 28 06:24:30 crc kubenswrapper[4955]: I1128 06:24:30.564543 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4fkz"] Nov 28 06:24:30 crc kubenswrapper[4955]: I1128 06:24:30.564795 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t4fkz" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerName="registry-server" containerID="cri-o://f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0" gracePeriod=2 Nov 28 06:24:30 crc kubenswrapper[4955]: I1128 06:24:30.963116 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:24:30 crc kubenswrapper[4955]: I1128 06:24:30.985059 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-utilities\") pod \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " Nov 28 06:24:30 crc kubenswrapper[4955]: I1128 06:24:30.985203 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9zwf\" (UniqueName: \"kubernetes.io/projected/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-kube-api-access-x9zwf\") pod \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " Nov 28 06:24:30 crc kubenswrapper[4955]: I1128 06:24:30.985235 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-catalog-content\") pod \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\" (UID: \"29a63cf6-5c16-4f9d-9da2-30a613c6b20a\") " Nov 28 06:24:30 crc kubenswrapper[4955]: I1128 06:24:30.985840 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-utilities" (OuterVolumeSpecName: "utilities") pod "29a63cf6-5c16-4f9d-9da2-30a613c6b20a" (UID: "29a63cf6-5c16-4f9d-9da2-30a613c6b20a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:24:30 crc kubenswrapper[4955]: I1128 06:24:30.991242 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-kube-api-access-x9zwf" (OuterVolumeSpecName: "kube-api-access-x9zwf") pod "29a63cf6-5c16-4f9d-9da2-30a613c6b20a" (UID: "29a63cf6-5c16-4f9d-9da2-30a613c6b20a"). InnerVolumeSpecName "kube-api-access-x9zwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.087235 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9zwf\" (UniqueName: \"kubernetes.io/projected/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-kube-api-access-x9zwf\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.087494 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.222851 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29a63cf6-5c16-4f9d-9da2-30a613c6b20a" (UID: "29a63cf6-5c16-4f9d-9da2-30a613c6b20a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.289429 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29a63cf6-5c16-4f9d-9da2-30a613c6b20a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.456656 4955 generic.go:334] "Generic (PLEG): container finished" podID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerID="f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0" exitCode=0 Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.456709 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4fkz" event={"ID":"29a63cf6-5c16-4f9d-9da2-30a613c6b20a","Type":"ContainerDied","Data":"f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0"} Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.456751 4955 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-t4fkz" event={"ID":"29a63cf6-5c16-4f9d-9da2-30a613c6b20a","Type":"ContainerDied","Data":"a88457ba944b6ecf8fd2c6b95828bd257738e2c8a9e398892d77e3b0beb43ae6"} Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.456758 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4fkz" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.456775 4955 scope.go:117] "RemoveContainer" containerID="f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.482129 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4fkz"] Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.485379 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t4fkz"] Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.616560 4955 scope.go:117] "RemoveContainer" containerID="8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.688895 4955 scope.go:117] "RemoveContainer" containerID="aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.711089 4955 scope.go:117] "RemoveContainer" containerID="f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.713065 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" path="/var/lib/kubelet/pods/29a63cf6-5c16-4f9d-9da2-30a613c6b20a/volumes" Nov 28 06:24:31 crc kubenswrapper[4955]: E1128 06:24:31.714709 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0\": container with ID starting with 
f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0 not found: ID does not exist" containerID="f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.714754 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0"} err="failed to get container status \"f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0\": rpc error: code = NotFound desc = could not find container \"f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0\": container with ID starting with f897ca430f15a66321a6a9598138d37e29c1c844b1c4153a4e9c919599e9fdf0 not found: ID does not exist" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.714780 4955 scope.go:117] "RemoveContainer" containerID="8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64" Nov 28 06:24:31 crc kubenswrapper[4955]: E1128 06:24:31.716345 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64\": container with ID starting with 8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64 not found: ID does not exist" containerID="8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.716373 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64"} err="failed to get container status \"8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64\": rpc error: code = NotFound desc = could not find container \"8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64\": container with ID starting with 8790bea4dc011a07dfc9f272c6ced87b7f35869703fb111ed4e8bc38f4328f64 not found: ID does not 
exist" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.716386 4955 scope.go:117] "RemoveContainer" containerID="aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57" Nov 28 06:24:31 crc kubenswrapper[4955]: E1128 06:24:31.716859 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57\": container with ID starting with aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57 not found: ID does not exist" containerID="aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57" Nov 28 06:24:31 crc kubenswrapper[4955]: I1128 06:24:31.716986 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57"} err="failed to get container status \"aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57\": rpc error: code = NotFound desc = could not find container \"aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57\": container with ID starting with aa1f4830b8b4f0d2e29551441e79748781cf1aad946c8084a60731c8d1cdde57 not found: ID does not exist" Nov 28 06:24:32 crc kubenswrapper[4955]: I1128 06:24:32.473193 4955 generic.go:334] "Generic (PLEG): container finished" podID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerID="292ef1e0687f331391b1ee6bbae689ea5a3d34ac39f9198183a0e094afff6101" exitCode=0 Nov 28 06:24:32 crc kubenswrapper[4955]: I1128 06:24:32.473289 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rnck" event={"ID":"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6","Type":"ContainerDied","Data":"292ef1e0687f331391b1ee6bbae689ea5a3d34ac39f9198183a0e094afff6101"} Nov 28 06:24:32 crc kubenswrapper[4955]: I1128 06:24:32.475724 4955 generic.go:334] "Generic (PLEG): container finished" podID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" 
containerID="17ef45a1ad7eac34366d2f4a6aa9e5f8a8965ec71120f847bb4f25501d080400" exitCode=0 Nov 28 06:24:32 crc kubenswrapper[4955]: I1128 06:24:32.475798 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhnw8" event={"ID":"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6","Type":"ContainerDied","Data":"17ef45a1ad7eac34366d2f4a6aa9e5f8a8965ec71120f847bb4f25501d080400"} Nov 28 06:24:33 crc kubenswrapper[4955]: I1128 06:24:33.484049 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrjcd" event={"ID":"f1a74a4b-b614-48f9-bc76-26f457ae5acd","Type":"ContainerStarted","Data":"adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343"} Nov 28 06:24:33 crc kubenswrapper[4955]: I1128 06:24:33.486840 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhnw8" event={"ID":"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6","Type":"ContainerStarted","Data":"d8a2b04653cc692c9d2010af704f26bd1830329dacce2f0614c994997660fb9f"} Nov 28 06:24:33 crc kubenswrapper[4955]: I1128 06:24:33.488533 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rnck" event={"ID":"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6","Type":"ContainerStarted","Data":"b33ee73d01e2f00467e9eea8b85d9f5a19600e6c6e65d38ffbb71511f53b7235"} Nov 28 06:24:33 crc kubenswrapper[4955]: I1128 06:24:33.489956 4955 generic.go:334] "Generic (PLEG): container finished" podID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerID="36dd1337ce0773d11b4e602d163624af3a7802d0e809486a068c52e83874e8e3" exitCode=0 Nov 28 06:24:33 crc kubenswrapper[4955]: I1128 06:24:33.489999 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svtkn" event={"ID":"bd3aeed8-258b-459f-bb90-be61ddf70b91","Type":"ContainerDied","Data":"36dd1337ce0773d11b4e602d163624af3a7802d0e809486a068c52e83874e8e3"} Nov 28 06:24:33 crc kubenswrapper[4955]: I1128 
06:24:33.531560 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jhnw8" podStartSLOduration=2.665321717 podStartE2EDuration="46.531539081s" podCreationTimestamp="2025-11-28 06:23:47 +0000 UTC" firstStartedPulling="2025-11-28 06:23:49.093890988 +0000 UTC m=+151.683146558" lastFinishedPulling="2025-11-28 06:24:32.960108352 +0000 UTC m=+195.549363922" observedRunningTime="2025-11-28 06:24:33.528731742 +0000 UTC m=+196.117987352" watchObservedRunningTime="2025-11-28 06:24:33.531539081 +0000 UTC m=+196.120794651" Nov 28 06:24:33 crc kubenswrapper[4955]: I1128 06:24:33.553048 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4rnck" podStartSLOduration=2.630136882 podStartE2EDuration="47.553029622s" podCreationTimestamp="2025-11-28 06:23:46 +0000 UTC" firstStartedPulling="2025-11-28 06:23:48.083208328 +0000 UTC m=+150.672463898" lastFinishedPulling="2025-11-28 06:24:33.006101068 +0000 UTC m=+195.595356638" observedRunningTime="2025-11-28 06:24:33.549872173 +0000 UTC m=+196.139127753" watchObservedRunningTime="2025-11-28 06:24:33.553029622 +0000 UTC m=+196.142285192" Nov 28 06:24:34 crc kubenswrapper[4955]: I1128 06:24:34.512655 4955 generic.go:334] "Generic (PLEG): container finished" podID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerID="cc9d1f0fc09c24dafc1ea3ca8c2331ae7760c6fe73d6b3a43e1d6b4f46bfe8b0" exitCode=0 Nov 28 06:24:34 crc kubenswrapper[4955]: I1128 06:24:34.512729 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpjkz" event={"ID":"78355f71-961d-418e-a9d8-5332eb5c0ab1","Type":"ContainerDied","Data":"cc9d1f0fc09c24dafc1ea3ca8c2331ae7760c6fe73d6b3a43e1d6b4f46bfe8b0"} Nov 28 06:24:34 crc kubenswrapper[4955]: I1128 06:24:34.517392 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svtkn" 
event={"ID":"bd3aeed8-258b-459f-bb90-be61ddf70b91","Type":"ContainerStarted","Data":"b9f7861d69caae0cd19549bf8dcf796e5326ad61693db193aed61455ed379a47"} Nov 28 06:24:34 crc kubenswrapper[4955]: I1128 06:24:34.520148 4955 generic.go:334] "Generic (PLEG): container finished" podID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerID="adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343" exitCode=0 Nov 28 06:24:34 crc kubenswrapper[4955]: I1128 06:24:34.520208 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrjcd" event={"ID":"f1a74a4b-b614-48f9-bc76-26f457ae5acd","Type":"ContainerDied","Data":"adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343"} Nov 28 06:24:34 crc kubenswrapper[4955]: I1128 06:24:34.568552 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-svtkn" podStartSLOduration=3.452693036 podStartE2EDuration="51.568536118s" podCreationTimestamp="2025-11-28 06:23:43 +0000 UTC" firstStartedPulling="2025-11-28 06:23:46.01083921 +0000 UTC m=+148.600094780" lastFinishedPulling="2025-11-28 06:24:34.126682292 +0000 UTC m=+196.715937862" observedRunningTime="2025-11-28 06:24:34.565893764 +0000 UTC m=+197.155149334" watchObservedRunningTime="2025-11-28 06:24:34.568536118 +0000 UTC m=+197.157791688" Nov 28 06:24:35 crc kubenswrapper[4955]: I1128 06:24:35.528986 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpjkz" event={"ID":"78355f71-961d-418e-a9d8-5332eb5c0ab1","Type":"ContainerStarted","Data":"a975cd478a752818bf083b5c1c8272006da202369920b6c5a16b85feca58ed0c"} Nov 28 06:24:35 crc kubenswrapper[4955]: I1128 06:24:35.531483 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrjcd" 
event={"ID":"f1a74a4b-b614-48f9-bc76-26f457ae5acd","Type":"ContainerStarted","Data":"d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1"} Nov 28 06:24:35 crc kubenswrapper[4955]: I1128 06:24:35.565093 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpjkz" podStartSLOduration=2.647569983 podStartE2EDuration="51.565074452s" podCreationTimestamp="2025-11-28 06:23:44 +0000 UTC" firstStartedPulling="2025-11-28 06:23:46.015746367 +0000 UTC m=+148.605001927" lastFinishedPulling="2025-11-28 06:24:34.933250826 +0000 UTC m=+197.522506396" observedRunningTime="2025-11-28 06:24:35.563342954 +0000 UTC m=+198.152598544" watchObservedRunningTime="2025-11-28 06:24:35.565074452 +0000 UTC m=+198.154330022" Nov 28 06:24:35 crc kubenswrapper[4955]: I1128 06:24:35.586444 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hrjcd" podStartSLOduration=2.612913575 podStartE2EDuration="51.586428139s" podCreationTimestamp="2025-11-28 06:23:44 +0000 UTC" firstStartedPulling="2025-11-28 06:23:45.998871986 +0000 UTC m=+148.588127556" lastFinishedPulling="2025-11-28 06:24:34.97238655 +0000 UTC m=+197.561642120" observedRunningTime="2025-11-28 06:24:35.585867393 +0000 UTC m=+198.175122963" watchObservedRunningTime="2025-11-28 06:24:35.586428139 +0000 UTC m=+198.175683709" Nov 28 06:24:36 crc kubenswrapper[4955]: I1128 06:24:36.586368 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:24:36 crc kubenswrapper[4955]: I1128 06:24:36.586767 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:24:36 crc kubenswrapper[4955]: I1128 06:24:36.629421 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:24:37 
crc kubenswrapper[4955]: I1128 06:24:37.360970 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:24:37 crc kubenswrapper[4955]: I1128 06:24:37.361028 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:24:38 crc kubenswrapper[4955]: I1128 06:24:38.399815 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jhnw8" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="registry-server" probeResult="failure" output=< Nov 28 06:24:38 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Nov 28 06:24:38 crc kubenswrapper[4955]: > Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.170258 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.171691 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.216375 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.384112 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.384159 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.425497 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.583007 4955 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.583050 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.649296 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.656264 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:24:44 crc kubenswrapper[4955]: I1128 06:24:44.672092 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:24:45 crc kubenswrapper[4955]: I1128 06:24:45.672483 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:24:46 crc kubenswrapper[4955]: I1128 06:24:46.656573 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:24:47 crc kubenswrapper[4955]: I1128 06:24:47.048193 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpjkz"] Nov 28 06:24:47 crc kubenswrapper[4955]: I1128 06:24:47.429492 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:24:47 crc kubenswrapper[4955]: I1128 06:24:47.487779 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:24:47 crc kubenswrapper[4955]: I1128 06:24:47.604734 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vpjkz" 
podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerName="registry-server" containerID="cri-o://a975cd478a752818bf083b5c1c8272006da202369920b6c5a16b85feca58ed0c" gracePeriod=2 Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.446125 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rnck"] Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.447022 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4rnck" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerName="registry-server" containerID="cri-o://b33ee73d01e2f00467e9eea8b85d9f5a19600e6c6e65d38ffbb71511f53b7235" gracePeriod=2 Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.627150 4955 generic.go:334] "Generic (PLEG): container finished" podID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerID="b33ee73d01e2f00467e9eea8b85d9f5a19600e6c6e65d38ffbb71511f53b7235" exitCode=0 Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.627186 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rnck" event={"ID":"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6","Type":"ContainerDied","Data":"b33ee73d01e2f00467e9eea8b85d9f5a19600e6c6e65d38ffbb71511f53b7235"} Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.637538 4955 generic.go:334] "Generic (PLEG): container finished" podID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerID="a975cd478a752818bf083b5c1c8272006da202369920b6c5a16b85feca58ed0c" exitCode=0 Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.637620 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpjkz" event={"ID":"78355f71-961d-418e-a9d8-5332eb5c0ab1","Type":"ContainerDied","Data":"a975cd478a752818bf083b5c1c8272006da202369920b6c5a16b85feca58ed0c"} Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.906587 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.911284 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.949255 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-catalog-content\") pod \"78355f71-961d-418e-a9d8-5332eb5c0ab1\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.949318 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw448\" (UniqueName: \"kubernetes.io/projected/78355f71-961d-418e-a9d8-5332eb5c0ab1-kube-api-access-zw448\") pod \"78355f71-961d-418e-a9d8-5332eb5c0ab1\" (UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.949396 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-utilities\") pod \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.949431 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-catalog-content\") pod \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.949473 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-utilities\") pod \"78355f71-961d-418e-a9d8-5332eb5c0ab1\" 
(UID: \"78355f71-961d-418e-a9d8-5332eb5c0ab1\") " Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.949529 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s9qf\" (UniqueName: \"kubernetes.io/projected/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-kube-api-access-7s9qf\") pod \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\" (UID: \"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6\") " Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.950564 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-utilities" (OuterVolumeSpecName: "utilities") pod "d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" (UID: "d982a2fd-ea0e-45cb-8a06-d6f08855b5f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.951932 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-utilities" (OuterVolumeSpecName: "utilities") pod "78355f71-961d-418e-a9d8-5332eb5c0ab1" (UID: "78355f71-961d-418e-a9d8-5332eb5c0ab1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.960765 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78355f71-961d-418e-a9d8-5332eb5c0ab1-kube-api-access-zw448" (OuterVolumeSpecName: "kube-api-access-zw448") pod "78355f71-961d-418e-a9d8-5332eb5c0ab1" (UID: "78355f71-961d-418e-a9d8-5332eb5c0ab1"). InnerVolumeSpecName "kube-api-access-zw448". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.961000 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-kube-api-access-7s9qf" (OuterVolumeSpecName: "kube-api-access-7s9qf") pod "d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" (UID: "d982a2fd-ea0e-45cb-8a06-d6f08855b5f6"). InnerVolumeSpecName "kube-api-access-7s9qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:24:49 crc kubenswrapper[4955]: I1128 06:24:49.989148 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" (UID: "d982a2fd-ea0e-45cb-8a06-d6f08855b5f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.019863 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78355f71-961d-418e-a9d8-5332eb5c0ab1" (UID: "78355f71-961d-418e-a9d8-5332eb5c0ab1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.050727 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.050957 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.051042 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.051157 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s9qf\" (UniqueName: \"kubernetes.io/projected/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6-kube-api-access-7s9qf\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.051238 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78355f71-961d-418e-a9d8-5332eb5c0ab1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.051315 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw448\" (UniqueName: \"kubernetes.io/projected/78355f71-961d-418e-a9d8-5332eb5c0ab1-kube-api-access-zw448\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.650435 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4rnck" event={"ID":"d982a2fd-ea0e-45cb-8a06-d6f08855b5f6","Type":"ContainerDied","Data":"1da65619e7ddfef8ae40e821e784a18695e8d6f9cefb760df5afd32e13b518ab"} Nov 
28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.650580 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4rnck" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.650610 4955 scope.go:117] "RemoveContainer" containerID="b33ee73d01e2f00467e9eea8b85d9f5a19600e6c6e65d38ffbb71511f53b7235" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.658330 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpjkz" event={"ID":"78355f71-961d-418e-a9d8-5332eb5c0ab1","Type":"ContainerDied","Data":"9c410f987bad993e9a62983af28ae2bb5570b778ca1c5c5f0ddd34fce7ad0d9a"} Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.658471 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpjkz" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.679077 4955 scope.go:117] "RemoveContainer" containerID="292ef1e0687f331391b1ee6bbae689ea5a3d34ac39f9198183a0e094afff6101" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.710100 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rnck"] Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.721847 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4rnck"] Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.730437 4955 scope.go:117] "RemoveContainer" containerID="a191cc736ebbc9adff9cac3ba3b12a851d7b06d020dad2141a5df8670f44777b" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.733096 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpjkz"] Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.739651 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vpjkz"] Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 
06:24:50.749465 4955 scope.go:117] "RemoveContainer" containerID="a975cd478a752818bf083b5c1c8272006da202369920b6c5a16b85feca58ed0c" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.769958 4955 scope.go:117] "RemoveContainer" containerID="cc9d1f0fc09c24dafc1ea3ca8c2331ae7760c6fe73d6b3a43e1d6b4f46bfe8b0" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.789938 4955 scope.go:117] "RemoveContainer" containerID="6574f1f17c3ac5814dfcb3cc443fd99ab34980597879a2ad915d413270b51fa8" Nov 28 06:24:50 crc kubenswrapper[4955]: I1128 06:24:50.940398 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" podUID="69319eb6-a378-4a28-a980-282c075c1c78" containerName="oauth-openshift" containerID="cri-o://3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed" gracePeriod=15 Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.321216 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475165 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qt8g\" (UniqueName: \"kubernetes.io/projected/69319eb6-a378-4a28-a980-282c075c1c78-kube-api-access-6qt8g\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475262 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-session\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475345 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-service-ca\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475403 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-provider-selection\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475473 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-cliconfig\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475590 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-serving-cert\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475698 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-router-certs\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475750 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-trusted-ca-bundle\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475796 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-audit-policies\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475859 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-ocp-branding-template\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475908 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-idp-0-file-data\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.475954 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-login\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.476003 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69319eb6-a378-4a28-a980-282c075c1c78-audit-dir\") pod 
\"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.476061 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-error\") pod \"69319eb6-a378-4a28-a980-282c075c1c78\" (UID: \"69319eb6-a378-4a28-a980-282c075c1c78\") " Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.492366 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.492958 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.494060 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.495643 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69319eb6-a378-4a28-a980-282c075c1c78-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.496365 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.497424 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.531453 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69319eb6-a378-4a28-a980-282c075c1c78-kube-api-access-6qt8g" (OuterVolumeSpecName: "kube-api-access-6qt8g") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "kube-api-access-6qt8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.531990 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.535985 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.536970 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.537189 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.539238 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.539968 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.545919 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "69319eb6-a378-4a28-a980-282c075c1c78" (UID: "69319eb6-a378-4a28-a980-282c075c1c78"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577338 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577366 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577437 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577447 4955 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69319eb6-a378-4a28-a980-282c075c1c78-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577457 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577466 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qt8g\" (UniqueName: \"kubernetes.io/projected/69319eb6-a378-4a28-a980-282c075c1c78-kube-api-access-6qt8g\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577475 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577485 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577495 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577516 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577524 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577533 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.577542 4955 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 
06:24:51.577551 4955 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/69319eb6-a378-4a28-a980-282c075c1c78-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.666003 4955 generic.go:334] "Generic (PLEG): container finished" podID="69319eb6-a378-4a28-a980-282c075c1c78" containerID="3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed" exitCode=0
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.666042 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs"
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.666049 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" event={"ID":"69319eb6-a378-4a28-a980-282c075c1c78","Type":"ContainerDied","Data":"3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed"}
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.666073 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-z7ncs" event={"ID":"69319eb6-a378-4a28-a980-282c075c1c78","Type":"ContainerDied","Data":"38c79c74f0744aa9e5023ff7d3740878ab8b9cbe2ebc3ed49d9fa66937a5cd54"}
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.666108 4955 scope.go:117] "RemoveContainer" containerID="3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed"
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.681492 4955 scope.go:117] "RemoveContainer" containerID="3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed"
Nov 28 06:24:51 crc kubenswrapper[4955]: E1128 06:24:51.681852 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed\": container with ID starting with 3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed not found: ID does not exist" containerID="3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed"
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.681884 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed"} err="failed to get container status \"3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed\": rpc error: code = NotFound desc = could not find container \"3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed\": container with ID starting with 3c3dbd4a7dde562a3b57f037593dcee490277ad9ae91c9c896274c0b64c629ed not found: ID does not exist"
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.690213 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-z7ncs"]
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.694184 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-z7ncs"]
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.710012 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69319eb6-a378-4a28-a980-282c075c1c78" path="/var/lib/kubelet/pods/69319eb6-a378-4a28-a980-282c075c1c78/volumes"
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.710676 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" path="/var/lib/kubelet/pods/78355f71-961d-418e-a9d8-5332eb5c0ab1/volumes"
Nov 28 06:24:51 crc kubenswrapper[4955]: I1128 06:24:51.711279 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" path="/var/lib/kubelet/pods/d982a2fd-ea0e-45cb-8a06-d6f08855b5f6/volumes"
Nov 28 06:24:53 crc kubenswrapper[4955]: I1128 06:24:53.393199 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 06:24:53 crc kubenswrapper[4955]: I1128 06:24:53.394373 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 06:24:53 crc kubenswrapper[4955]: I1128 06:24:53.394604 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht"
Nov 28 06:24:53 crc kubenswrapper[4955]: I1128 06:24:53.395573 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 06:24:53 crc kubenswrapper[4955]: I1128 06:24:53.395838 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f" gracePeriod=600
Nov 28 06:24:53 crc kubenswrapper[4955]: I1128 06:24:53.687430 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f" exitCode=0
Nov 28 06:24:53 crc kubenswrapper[4955]: I1128 06:24:53.687533 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f"}
Nov 28 06:24:53 crc kubenswrapper[4955]: I1128 06:24:53.687858 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"3ab4c45c04e143ac7aa6b50cb9f45e7068559fdf751a815d8b1521f9ea24b7a4"}
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.533841 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"]
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.534872 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerName="extract-utilities"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.534894 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerName="extract-utilities"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.534917 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.534930 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.534949 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.534964 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.534979 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerName="extract-content"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.534991 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerName="extract-content"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535004 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerName="extract-content"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535017 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerName="extract-content"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535040 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerName="extract-content"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535052 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerName="extract-content"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535067 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535079 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535239 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerName="extract-content"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535251 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerName="extract-content"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535270 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535282 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535303 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerName="extract-utilities"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535315 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerName="extract-utilities"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535333 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69319eb6-a378-4a28-a980-282c075c1c78" containerName="oauth-openshift"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535345 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="69319eb6-a378-4a28-a980-282c075c1c78" containerName="oauth-openshift"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535362 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerName="extract-utilities"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535375 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerName="extract-utilities"
Nov 28 06:24:58 crc kubenswrapper[4955]: E1128 06:24:58.535395 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerName="extract-utilities"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535407 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerName="extract-utilities"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535592 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="78355f71-961d-418e-a9d8-5332eb5c0ab1" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535625 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d982a2fd-ea0e-45cb-8a06-d6f08855b5f6" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535642 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a63cf6-5c16-4f9d-9da2-30a613c6b20a" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535658 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc2de0d3-1a18-4cdc-9377-17bac629998c" containerName="registry-server"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.535672 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="69319eb6-a378-4a28-a980-282c075c1c78" containerName="oauth-openshift"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.536281 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.539273 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.542905 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.543408 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.543683 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.543939 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.544142 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.545452 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.545692 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.545978 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.546028 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.546851 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.547352 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.560683 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"]
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.563949 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.569432 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.569816 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.674465 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-cliconfig\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.674565 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-serving-cert\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.674991 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-session\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675062 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675092 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-audit-policies\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675120 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-router-certs\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675142 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675230 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675270 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-audit-dir\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675312 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59f9\" (UniqueName: \"kubernetes.io/projected/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-kube-api-access-w59f9\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675346 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-login\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675450 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675532 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-service-ca\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.675577 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-error\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.776829 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-cliconfig\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.776917 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-serving-cert\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.777120 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-session\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.777390 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.777459 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-audit-policies\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.777623 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-router-certs\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.777752 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.777910 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.778056 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-audit-dir\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.778376 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-login\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.778393 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-audit-dir\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.778449 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w59f9\" (UniqueName: \"kubernetes.io/projected/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-kube-api-access-w59f9\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.778594 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.778673 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-service-ca\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.778727 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-error\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.781319 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-cliconfig\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.783177 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-audit-policies\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.784220 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-service-ca\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.785092 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.787857 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.788130 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-serving-cert\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.788297 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-login\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.788915 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-router-certs\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.793389 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-session\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.796912 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-template-error\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.798554 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.799056 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.808446 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w59f9\" (UniqueName: \"kubernetes.io/projected/c8fb536e-6b4b-48cd-a0ef-d88fa76db913-kube-api-access-w59f9\") pod \"oauth-openshift-676ffb7f95-6tgcl\" (UID: \"c8fb536e-6b4b-48cd-a0ef-d88fa76db913\") " pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:58 crc kubenswrapper[4955]: I1128 06:24:58.902712 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:59 crc kubenswrapper[4955]: I1128 06:24:59.156341 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"]
Nov 28 06:24:59 crc kubenswrapper[4955]: I1128 06:24:59.719451 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl" event={"ID":"c8fb536e-6b4b-48cd-a0ef-d88fa76db913","Type":"ContainerStarted","Data":"b6fa225426bb08bfd791a4a625073c0fd97bc04107d541a9aa84cdd52f7a79db"}
Nov 28 06:24:59 crc kubenswrapper[4955]: I1128 06:24:59.720008 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:24:59 crc kubenswrapper[4955]: I1128 06:24:59.720024 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl" event={"ID":"c8fb536e-6b4b-48cd-a0ef-d88fa76db913","Type":"ContainerStarted","Data":"1ccfdc3079886fc0624504c5863b5893f3312c9093bfe977f026ee7c027dfe83"}
Nov 28 06:24:59 crc kubenswrapper[4955]: I1128 06:24:59.749204 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl" podStartSLOduration=34.749184025 podStartE2EDuration="34.749184025s" podCreationTimestamp="2025-11-28 06:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:24:59.747208776 +0000 UTC m=+222.336464426" watchObservedRunningTime="2025-11-28 06:24:59.749184025 +0000 UTC m=+222.338439595"
Nov 28 06:24:59 crc kubenswrapper[4955]: I1128 06:24:59.871014 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-676ffb7f95-6tgcl"
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.113916 4955 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.114492 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386" gracePeriod=15
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.114665 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac" gracePeriod=15
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.114715 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf" gracePeriod=15
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.114757 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea" gracePeriod=15
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.114814 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde" gracePeriod=15
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.118750 4955 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 28 06:25:06 crc kubenswrapper[4955]: E1128 06:25:06.119015 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119036 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 28 06:25:06 crc kubenswrapper[4955]: E1128 06:25:06.119054 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119063 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 28 06:25:06 crc kubenswrapper[4955]: E1128 06:25:06.119075 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119084 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 28 06:25:06 crc kubenswrapper[4955]: E1128 06:25:06.119102 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119111 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 28 06:25:06 crc kubenswrapper[4955]: E1128
06:25:06.119125 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119132 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 06:25:06 crc kubenswrapper[4955]: E1128 06:25:06.119146 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119154 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119266 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119283 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119293 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119303 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119312 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119321 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Nov 28 06:25:06 crc kubenswrapper[4955]: E1128 06:25:06.119450 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.119461 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.122752 4955 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.123294 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.134371 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.188032 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.188094 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc 
kubenswrapper[4955]: I1128 06:25:06.188119 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.188155 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.188240 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.188286 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.188303 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 
06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.188321 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.289899 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.289945 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.289967 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290006 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: 
I1128 06:25:06.290038 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290073 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290072 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290039 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290095 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290115 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290163 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290187 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290225 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290225 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290260 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.290293 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.758456 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.759649 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.760393 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac" exitCode=0 Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.760421 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf" exitCode=0 Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.760432 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea" exitCode=0 Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.760443 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde" exitCode=2 Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.760497 4955 scope.go:117] "RemoveContainer" containerID="f0d3bc3d028df49665a78aecdfa08650b680d9b826142c8a80622a70129fba5f" Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.762566 4955 generic.go:334] "Generic (PLEG): container finished" podID="76e628a5-2b40-40b9-a4d1-29ff689b1096" containerID="07c03cbbfa8d83ebbe79d6bbd2c520ec768548c4e4ad4b934d90a9df5367c9e2" exitCode=0 Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.762603 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"76e628a5-2b40-40b9-a4d1-29ff689b1096","Type":"ContainerDied","Data":"07c03cbbfa8d83ebbe79d6bbd2c520ec768548c4e4ad4b934d90a9df5367c9e2"} Nov 28 06:25:06 crc kubenswrapper[4955]: I1128 06:25:06.763442 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Nov 28 06:25:07 crc kubenswrapper[4955]: I1128 06:25:07.709721 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Nov 28 06:25:07 crc kubenswrapper[4955]: I1128 06:25:07.785623 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.006481 4955 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.006960 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.110600 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-var-lock\") pod \"76e628a5-2b40-40b9-a4d1-29ff689b1096\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.110661 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-kubelet-dir\") pod \"76e628a5-2b40-40b9-a4d1-29ff689b1096\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.110745 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76e628a5-2b40-40b9-a4d1-29ff689b1096-kube-api-access\") pod \"76e628a5-2b40-40b9-a4d1-29ff689b1096\" (UID: \"76e628a5-2b40-40b9-a4d1-29ff689b1096\") " Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.110781 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-var-lock" (OuterVolumeSpecName: "var-lock") pod "76e628a5-2b40-40b9-a4d1-29ff689b1096" (UID: "76e628a5-2b40-40b9-a4d1-29ff689b1096"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.110854 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "76e628a5-2b40-40b9-a4d1-29ff689b1096" (UID: "76e628a5-2b40-40b9-a4d1-29ff689b1096"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.111106 4955 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.111124 4955 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76e628a5-2b40-40b9-a4d1-29ff689b1096-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.116276 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e628a5-2b40-40b9-a4d1-29ff689b1096-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "76e628a5-2b40-40b9-a4d1-29ff689b1096" (UID: "76e628a5-2b40-40b9-a4d1-29ff689b1096"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.212437 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76e628a5-2b40-40b9-a4d1-29ff689b1096-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.797006 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.798069 4955 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386" exitCode=0 Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.800251 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"76e628a5-2b40-40b9-a4d1-29ff689b1096","Type":"ContainerDied","Data":"30b231a12d92d5f5968e06012ee9f2515f5227a83bc0ca60b163d7da0e72fa05"} Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.800317 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b231a12d92d5f5968e06012ee9f2515f5227a83bc0ca60b163d7da0e72fa05" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.800335 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 06:25:08 crc kubenswrapper[4955]: I1128 06:25:08.822119 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.005242 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.006209 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.006765 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.007130 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026001 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 
28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026111 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026135 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026156 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026243 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026398 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026746 4955 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026776 4955 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.026794 4955 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.710658 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.807142 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.807817 4955 scope.go:117] "RemoveContainer" containerID="41aa274dcd88975971ae6aed386207acf90b298690b9d0924bb525644ac99dac"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.807921 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.808614 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.808778 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.810527 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.810680 4955 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.830723 4955 scope.go:117] "RemoveContainer" containerID="f31c982a586dbf6f66d52648db3c064bf12cd29fed8b92af15dae45f0443deaf"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.844906 4955 scope.go:117] "RemoveContainer" containerID="5e9fa5437c61940812541ed02e8f4aa27663e2ea3e04035b731a055efb179bea"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.860167 4955 scope.go:117] "RemoveContainer" containerID="f04e601aa2f70fe2dba7530f420e5c6a64f6bad135b5ba12dfbf8eac1e589fde"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.873589 4955 scope.go:117] "RemoveContainer" containerID="6817499d214fdd6de7a17793cd4a03f1ecf865f8e76557da641f5e5a6cc8b386"
Nov 28 06:25:09 crc kubenswrapper[4955]: I1128 06:25:09.887290 4955 scope.go:117] "RemoveContainer" containerID="98d86f8e11f672884676790af11c660d9c3925e5721cad1a53bb49dc2d88fddc"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.169725 4955 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:25:11 crc kubenswrapper[4955]: I1128 06:25:11.170595 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:25:11 crc kubenswrapper[4955]: W1128 06:25:11.208776 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-a1f7edee7323c55898c294eef30b1be564ff6a157c199255163078c569bb5b1b WatchSource:0}: Error finding container a1f7edee7323c55898c294eef30b1be564ff6a157c199255163078c569bb5b1b: Status 404 returned error can't find the container with id a1f7edee7323c55898c294eef30b1be564ff6a157c199255163078c569bb5b1b
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.212139 4955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c179ba0f4c849 openshift-kube-apiserver 0 0001-01-01
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:25:11.210158153 +0000 UTC m=+233.799413723,LastTimestamp:2025-11-28 06:25:11.210158153 +0000 UTC m=+233.799413723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.216794 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.217037 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.217316 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.217562 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.218685 4955 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:11 crc kubenswrapper[4955]: I1128 06:25:11.218726 4955 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.218931 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="200ms"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.422218 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="400ms"
Nov 28 06:25:11 crc kubenswrapper[4955]: I1128 06:25:11.820579 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2"}
Nov 28 06:25:11 crc kubenswrapper[4955]: I1128 06:25:11.820921 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a1f7edee7323c55898c294eef30b1be564ff6a157c199255163078c569bb5b1b"}
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.821734 4955 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 06:25:11 crc kubenswrapper[4955]: I1128 06:25:11.822049 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:11 crc kubenswrapper[4955]: E1128 06:25:11.823254 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="800ms"
Nov 28 06:25:12 crc kubenswrapper[4955]: E1128 06:25:12.624175 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="1.6s"
Nov 28 06:25:14 crc kubenswrapper[4955]: E1128 06:25:14.225663 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="3.2s"
Nov 28 06:25:17 crc kubenswrapper[4955]: E1128 06:25:17.426327 4955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.97:6443: connect: connection refused" interval="6.4s"
Nov 28 06:25:17 crc kubenswrapper[4955]: I1128 06:25:17.703890 4955 util.go:30] "No
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:17 crc kubenswrapper[4955]: I1128 06:25:17.708941 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:17 crc kubenswrapper[4955]: I1128 06:25:17.709550 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:17 crc kubenswrapper[4955]: I1128 06:25:17.736986 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:17 crc kubenswrapper[4955]: I1128 06:25:17.737029 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:17 crc kubenswrapper[4955]: E1128 06:25:17.737608 4955 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:17 crc kubenswrapper[4955]: I1128 06:25:17.738258 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:17 crc kubenswrapper[4955]: W1128 06:25:17.786438 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-49158e167b98b303c56b54b58a4e0986a5b120180c5f5a7d4b42d697c9126095 WatchSource:0}: Error finding container 49158e167b98b303c56b54b58a4e0986a5b120180c5f5a7d4b42d697c9126095: Status 404 returned error can't find the container with id 49158e167b98b303c56b54b58a4e0986a5b120180c5f5a7d4b42d697c9126095
Nov 28 06:25:17 crc kubenswrapper[4955]: I1128 06:25:17.869695 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"49158e167b98b303c56b54b58a4e0986a5b120180c5f5a7d4b42d697c9126095"}
Nov 28 06:25:18 crc kubenswrapper[4955]: E1128 06:25:18.781589 4955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.97:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187c179ba0f4c849 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 06:25:11.210158153 +0000 UTC m=+233.799413723,LastTimestamp:2025-11-28 06:25:11.210158153 +0000 UTC m=+233.799413723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 06:25:18 crc kubenswrapper[4955]: I1128 06:25:18.878385 4955 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="9a1348fc18eac5ccadf6647716a9a1988afb2b5fc35415c8a7a3c071a6a96d5b" exitCode=0
Nov 28 06:25:18 crc kubenswrapper[4955]: I1128 06:25:18.878442 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9a1348fc18eac5ccadf6647716a9a1988afb2b5fc35415c8a7a3c071a6a96d5b"}
Nov 28 06:25:18 crc kubenswrapper[4955]: I1128 06:25:18.878788 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:18 crc kubenswrapper[4955]: I1128 06:25:18.878809 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:18 crc kubenswrapper[4955]: E1128 06:25:18.879240 4955 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.97:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:18 crc kubenswrapper[4955]: I1128 06:25:18.879723 4955 status_manager.go:851] "Failed to get status for pod" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.97:6443: connect: connection refused"
Nov 28 06:25:19 crc kubenswrapper[4955]: I1128 06:25:19.888583 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e1ca42ad1a101ef401148ddc49e759ed31bd13c1b9681f6181dac1e769d0acf1"}
Nov 28 06:25:19 crc kubenswrapper[4955]: I1128 06:25:19.888865 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"91f308349c23dbce80eedd6ba4a52554ffd7efc195333cc61aca663471706641"}
Nov 28 06:25:19 crc kubenswrapper[4955]: I1128 06:25:19.888876 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3fda1bc3f6c604a59c1f9efab997a6c7e36e2817c24043bb537c308d4a80ad84"}
Nov 28 06:25:19 crc kubenswrapper[4955]: I1128 06:25:19.891716 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Nov 28 06:25:19 crc kubenswrapper[4955]: I1128 06:25:19.891768 4955 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60" exitCode=1
Nov 28 06:25:19 crc kubenswrapper[4955]: I1128 06:25:19.891789 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60"}
Nov 28 06:25:19 crc kubenswrapper[4955]: I1128 06:25:19.892271 4955 scope.go:117] "RemoveContainer" containerID="29ff695ef91040b96b6a3baa84ffc1b46702ccff50f6ae4e030b230b5c392a60"
Nov 28 06:25:20 crc kubenswrapper[4955]: I1128 06:25:20.907698 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:20 crc kubenswrapper[4955]: I1128 06:25:20.907743 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:20 crc kubenswrapper[4955]: I1128 06:25:20.907620 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9e37925827bd5c6bef6c988d67f4c8d5780f39c427d44c2cd907b1a6ab2fe4f0"}
Nov 28 06:25:20 crc kubenswrapper[4955]: I1128 06:25:20.908022 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"96a7cf3e4783cc943d0bf464a832f4683091e275632406127d4423ed19c1a560"}
Nov 28 06:25:20 crc kubenswrapper[4955]: I1128 06:25:20.908060 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:20 crc kubenswrapper[4955]: I1128 06:25:20.915276 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Nov 28 06:25:20 crc kubenswrapper[4955]: I1128 06:25:20.915336 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"53fecbb236ce76535559625dcccb9cd5f16015b667da11d98a1c1861b97f2a4c"}
Nov 28 06:25:22 crc kubenswrapper[4955]: I1128 06:25:22.741176 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:22 crc kubenswrapper[4955]: I1128 06:25:22.741496 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:22 crc kubenswrapper[4955]: I1128 06:25:22.748535 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:24 crc kubenswrapper[4955]: I1128 06:25:24.820238 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:25:25 crc kubenswrapper[4955]: I1128 06:25:25.909846 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:25:25 crc kubenswrapper[4955]: I1128 06:25:25.916546 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:25:25 crc kubenswrapper[4955]: I1128 06:25:25.919347 4955 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:25 crc kubenswrapper[4955]: I1128 06:25:25.946832 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:25 crc kubenswrapper[4955]: I1128 06:25:25.946889 4955 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:25 crc kubenswrapper[4955]: I1128 06:25:25.953771 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:26 crc kubenswrapper[4955]: I1128 06:25:26.952316 4955 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:26 crc kubenswrapper[4955]: I1128 06:25:26.952360 4955 mirror_client.go:130] "Deleting a mirror pod"
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c415150e-85c8-4880-805e-0bb4a4219df6"
Nov 28 06:25:27 crc kubenswrapper[4955]: I1128 06:25:27.722237 4955 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4cce49ad-5e79-4ad2-865a-bc33ee0725c1"
Nov 28 06:25:34 crc kubenswrapper[4955]: I1128 06:25:34.825594 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 06:25:35 crc kubenswrapper[4955]: I1128 06:25:35.138248 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Nov 28 06:25:35 crc kubenswrapper[4955]: I1128 06:25:35.987134 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 28 06:25:36 crc kubenswrapper[4955]: I1128 06:25:36.111727 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 28 06:25:36 crc kubenswrapper[4955]: I1128 06:25:36.608408 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.266425 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.531342 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.733941 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.749607 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.849998 4955 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.859681 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.859734 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.866009 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.889268 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.889247595 podStartE2EDuration="12.889247595s" podCreationTimestamp="2025-11-28 06:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:25:37.888268465 +0000 UTC m=+260.477524125" watchObservedRunningTime="2025-11-28 06:25:37.889247595 +0000 UTC m=+260.478503175"
Nov 28 06:25:37 crc kubenswrapper[4955]: I1128 06:25:37.975324 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 28 06:25:38 crc kubenswrapper[4955]: I1128 06:25:38.036022 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 28 06:25:38 crc kubenswrapper[4955]: I1128 06:25:38.089668 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 28 06:25:38 crc kubenswrapper[4955]: I1128 06:25:38.404472 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 06:25:38 crc kubenswrapper[4955]: I1128 06:25:38.427438 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 28 06:25:38 crc kubenswrapper[4955]: I1128 06:25:38.464075 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 28 06:25:38 crc kubenswrapper[4955]: I1128 06:25:38.677261 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 28 06:25:38 crc kubenswrapper[4955]: I1128 06:25:38.801077 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 28 06:25:38 crc kubenswrapper[4955]: I1128 06:25:38.966318 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.033603 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.241257 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.294078 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.396698 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.402041 4955 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.477354 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.811812 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.939079 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 28 06:25:39 crc kubenswrapper[4955]: I1128 06:25:39.948607 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.066255 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.091362 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.120566 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.247274 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.278611 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.347733 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.568151 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.685049 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.794281 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.811371 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.813403 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 28 06:25:40 crc kubenswrapper[4955]: I1128 06:25:40.995809 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.000833 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.157731 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.170140 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.221280 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.287312 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.293146 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.353743 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.369490 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.416198 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.422292 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.503743 4955 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.581639 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.584063 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.600601 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.655459 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.702742
4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.733684 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.771372 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.781397 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.787998 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.825680 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.850540 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.865985 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.980394 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 28 06:25:41 crc kubenswrapper[4955]: I1128 06:25:41.991941 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.054754 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.106371 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.107679 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.108477 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.359247 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.500565 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.512958 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.637013 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.653571 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.697121 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.736544 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.744483 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.812165 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.879674 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 28 06:25:42 crc kubenswrapper[4955]: I1128 06:25:42.983459 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.021776 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.063644 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.092607 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.103466 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.289952 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.294329 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.468145 4955 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-console"/"openshift-service-ca.crt" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.522265 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.565984 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.586745 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.683461 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.767452 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.785461 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.790438 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.850782 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.919966 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 06:25:43 crc kubenswrapper[4955]: I1128 06:25:43.975792 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.250182 4955 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.340974 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.379199 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.381637 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.439811 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.474669 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.502861 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.542985 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.568520 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.656702 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.687671 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 06:25:44 crc 
kubenswrapper[4955]: I1128 06:25:44.713053 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.737633 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.747947 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.775666 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.787708 4955 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.867811 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.871965 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.928270 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.946691 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.953398 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 06:25:44 crc kubenswrapper[4955]: I1128 06:25:44.998599 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 06:25:45 crc 
kubenswrapper[4955]: I1128 06:25:45.026426 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.090469 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.107395 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.254176 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.306998 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.329533 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.330409 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.366153 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.401032 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.449739 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.538043 4955 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.620963 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.697396 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.701213 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.757655 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.865197 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.944573 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.996234 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 06:25:45 crc kubenswrapper[4955]: I1128 06:25:45.997170 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.250983 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.285362 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.288608 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.296596 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.335628 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.367621 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.368997 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.523343 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.563631 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.628811 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.638753 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.683498 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.794085 
4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.812063 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.948474 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.957013 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 06:25:46 crc kubenswrapper[4955]: I1128 06:25:46.976576 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.102661 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.175865 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.177279 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.207977 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.319022 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.386588 4955 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.390193 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.405238 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.409630 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.471144 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.482602 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.509038 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.510982 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.528542 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.559215 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.586890 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 06:25:47 crc 
kubenswrapper[4955]: I1128 06:25:47.590905 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.602636 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.664621 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.849623 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.853341 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.857163 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.904488 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.912886 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 06:25:47 crc kubenswrapper[4955]: I1128 06:25:47.970784 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.133593 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.202626 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.219741 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.221746 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.291672 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.329431 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.339636 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.371913 4955 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.372189 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2" gracePeriod=5 Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.426491 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.427672 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.436500 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.445223 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.490214 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.517257 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.571893 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.640868 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.825702 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.850785 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.865022 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.878102 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.947061 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 06:25:48 crc kubenswrapper[4955]: I1128 06:25:48.954459 4955 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.019269 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.076612 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.088694 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.219554 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.255664 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.320003 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.361276 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.373615 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.492615 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.566895 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 
06:25:49.571852 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.678818 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.680549 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.746349 4955 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.776842 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.926299 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.926641 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.944536 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 06:25:49 crc kubenswrapper[4955]: I1128 06:25:49.966729 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.057482 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.111929 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.207703 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.348116 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.419162 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.460188 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.526147 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.764594 4955 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.792023 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.875644 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 06:25:50 crc kubenswrapper[4955]: I1128 06:25:50.971851 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.067182 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 
06:25:51.092685 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.219743 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.556064 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.610274 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.629382 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.669351 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.755588 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.805142 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.844617 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 06:25:51 crc kubenswrapper[4955]: I1128 06:25:51.930074 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 06:25:52 crc kubenswrapper[4955]: I1128 06:25:52.476307 4955 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 06:25:52 crc kubenswrapper[4955]: I1128 06:25:52.576290 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 06:25:52 crc kubenswrapper[4955]: I1128 06:25:52.973054 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 06:25:52 crc kubenswrapper[4955]: I1128 06:25:52.995171 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 06:25:53 crc kubenswrapper[4955]: I1128 06:25:53.124739 4955 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 06:25:53 crc kubenswrapper[4955]: I1128 06:25:53.188465 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 06:25:53 crc kubenswrapper[4955]: I1128 06:25:53.309697 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 06:25:53 crc kubenswrapper[4955]: I1128 06:25:53.665654 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 06:25:53 crc kubenswrapper[4955]: I1128 06:25:53.971155 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 06:25:53 crc kubenswrapper[4955]: I1128 06:25:53.971247 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.033743 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.033820 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.033888 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.033953 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.033992 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.034924 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.034992 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.035054 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.035074 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.044073 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.122205 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.122568 4955 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2" exitCode=137 Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.122621 4955 scope.go:117] "RemoveContainer" containerID="58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.122693 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.135313 4955 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.135342 4955 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.135351 4955 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.135359 4955 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 28 06:25:54 crc 
kubenswrapper[4955]: I1128 06:25:54.135376 4955 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.141027 4955 scope.go:117] "RemoveContainer" containerID="58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2" Nov 28 06:25:54 crc kubenswrapper[4955]: E1128 06:25:54.141586 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2\": container with ID starting with 58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2 not found: ID does not exist" containerID="58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.141633 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2"} err="failed to get container status \"58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2\": rpc error: code = NotFound desc = could not find container \"58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2\": container with ID starting with 58fdd7ae71ca98208e6c46c44b35fb8b7439ad336abef71177ee2c6467394af2 not found: ID does not exist" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.304881 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 06:25:54 crc kubenswrapper[4955]: I1128 06:25:54.711562 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 06:25:55 crc kubenswrapper[4955]: I1128 06:25:55.267239 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 
06:25:55 crc kubenswrapper[4955]: I1128 06:25:55.711168 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.279386 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v94cg"] Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.280389 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" podUID="0a49aa7e-6973-4a7b-9b1d-71922376ee73" containerName="controller-manager" containerID="cri-o://dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e" gracePeriod=30 Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.282474 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm"] Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.282689 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" podUID="e5a1023e-2f70-4592-b507-8a198260ed35" containerName="route-controller-manager" containerID="cri-o://cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5" gracePeriod=30 Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.647133 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.682645 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745678 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-client-ca\") pod \"e5a1023e-2f70-4592-b507-8a198260ed35\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745723 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmspc\" (UniqueName: \"kubernetes.io/projected/0a49aa7e-6973-4a7b-9b1d-71922376ee73-kube-api-access-jmspc\") pod \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745748 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5a1023e-2f70-4592-b507-8a198260ed35-serving-cert\") pod \"e5a1023e-2f70-4592-b507-8a198260ed35\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745781 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-config\") pod \"e5a1023e-2f70-4592-b507-8a198260ed35\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745797 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert\") pod \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745825 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-config\") pod \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745841 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca\") pod \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745873 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-proxy-ca-bundles\") pod \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\" (UID: \"0a49aa7e-6973-4a7b-9b1d-71922376ee73\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.745896 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcgfm\" (UniqueName: \"kubernetes.io/projected/e5a1023e-2f70-4592-b507-8a198260ed35-kube-api-access-hcgfm\") pod \"e5a1023e-2f70-4592-b507-8a198260ed35\" (UID: \"e5a1023e-2f70-4592-b507-8a198260ed35\") " Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.746991 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca" (OuterVolumeSpecName: "client-ca") pod "0a49aa7e-6973-4a7b-9b1d-71922376ee73" (UID: "0a49aa7e-6973-4a7b-9b1d-71922376ee73"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.747440 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0a49aa7e-6973-4a7b-9b1d-71922376ee73" (UID: "0a49aa7e-6973-4a7b-9b1d-71922376ee73"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.747623 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-config" (OuterVolumeSpecName: "config") pod "e5a1023e-2f70-4592-b507-8a198260ed35" (UID: "e5a1023e-2f70-4592-b507-8a198260ed35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.748064 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5a1023e-2f70-4592-b507-8a198260ed35" (UID: "e5a1023e-2f70-4592-b507-8a198260ed35"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.748262 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-config" (OuterVolumeSpecName: "config") pod "0a49aa7e-6973-4a7b-9b1d-71922376ee73" (UID: "0a49aa7e-6973-4a7b-9b1d-71922376ee73"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.752228 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a1023e-2f70-4592-b507-8a198260ed35-kube-api-access-hcgfm" (OuterVolumeSpecName: "kube-api-access-hcgfm") pod "e5a1023e-2f70-4592-b507-8a198260ed35" (UID: "e5a1023e-2f70-4592-b507-8a198260ed35"). InnerVolumeSpecName "kube-api-access-hcgfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.752296 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a49aa7e-6973-4a7b-9b1d-71922376ee73" (UID: "0a49aa7e-6973-4a7b-9b1d-71922376ee73"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.754220 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a49aa7e-6973-4a7b-9b1d-71922376ee73-kube-api-access-jmspc" (OuterVolumeSpecName: "kube-api-access-jmspc") pod "0a49aa7e-6973-4a7b-9b1d-71922376ee73" (UID: "0a49aa7e-6973-4a7b-9b1d-71922376ee73"). InnerVolumeSpecName "kube-api-access-jmspc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.754933 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5a1023e-2f70-4592-b507-8a198260ed35-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5a1023e-2f70-4592-b507-8a198260ed35" (UID: "e5a1023e-2f70-4592-b507-8a198260ed35"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847133 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmspc\" (UniqueName: \"kubernetes.io/projected/0a49aa7e-6973-4a7b-9b1d-71922376ee73-kube-api-access-jmspc\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847160 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5a1023e-2f70-4592-b507-8a198260ed35-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847169 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847179 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a49aa7e-6973-4a7b-9b1d-71922376ee73-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847187 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847195 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847203 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a49aa7e-6973-4a7b-9b1d-71922376ee73-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847211 4955 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-hcgfm\" (UniqueName: \"kubernetes.io/projected/e5a1023e-2f70-4592-b507-8a198260ed35-kube-api-access-hcgfm\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:16 crc kubenswrapper[4955]: I1128 06:26:16.847219 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5a1023e-2f70-4592-b507-8a198260ed35-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.262782 4955 generic.go:334] "Generic (PLEG): container finished" podID="e5a1023e-2f70-4592-b507-8a198260ed35" containerID="cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5" exitCode=0 Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.262848 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" event={"ID":"e5a1023e-2f70-4592-b507-8a198260ed35","Type":"ContainerDied","Data":"cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5"} Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.263573 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" event={"ID":"e5a1023e-2f70-4592-b507-8a198260ed35","Type":"ContainerDied","Data":"095c6ced1aa7ee1b776c974f47908421c2f2558af0141bd410e20383753aeef1"} Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.262878 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.263645 4955 scope.go:117] "RemoveContainer" containerID="cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.267947 4955 generic.go:334] "Generic (PLEG): container finished" podID="0a49aa7e-6973-4a7b-9b1d-71922376ee73" containerID="dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e" exitCode=0 Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.268005 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" event={"ID":"0a49aa7e-6973-4a7b-9b1d-71922376ee73","Type":"ContainerDied","Data":"dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e"} Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.268043 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" event={"ID":"0a49aa7e-6973-4a7b-9b1d-71922376ee73","Type":"ContainerDied","Data":"3237a811b8a34b9842544b4c8ca5100e959d16d4c858bdaac8ffe429ac9065d2"} Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.268117 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v94cg" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.285596 4955 scope.go:117] "RemoveContainer" containerID="cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5" Nov 28 06:26:17 crc kubenswrapper[4955]: E1128 06:26:17.286019 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5\": container with ID starting with cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5 not found: ID does not exist" containerID="cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.286056 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5"} err="failed to get container status \"cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5\": rpc error: code = NotFound desc = could not find container \"cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5\": container with ID starting with cd7dea8a55f52e93e7d08fed14eec79cfa70288758eed3c1750f3b38a898e4d5 not found: ID does not exist" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.286080 4955 scope.go:117] "RemoveContainer" containerID="dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.306967 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm"] Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.308311 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-v6cpm"] Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.313821 4955 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v94cg"] Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.316685 4955 scope.go:117] "RemoveContainer" containerID="dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e" Nov 28 06:26:17 crc kubenswrapper[4955]: E1128 06:26:17.317125 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e\": container with ID starting with dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e not found: ID does not exist" containerID="dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.317166 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e"} err="failed to get container status \"dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e\": rpc error: code = NotFound desc = could not find container \"dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e\": container with ID starting with dfa2818b458c5f168ddfa7640330befbe7e011e0bcfe2a44a80b129f7664378e not found: ID does not exist" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.317852 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v94cg"] Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.718644 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a49aa7e-6973-4a7b-9b1d-71922376ee73" path="/var/lib/kubelet/pods/0a49aa7e-6973-4a7b-9b1d-71922376ee73/volumes" Nov 28 06:26:17 crc kubenswrapper[4955]: I1128 06:26:17.720092 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a1023e-2f70-4592-b507-8a198260ed35" 
path="/var/lib/kubelet/pods/e5a1023e-2f70-4592-b507-8a198260ed35/volumes" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.109976 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq"] Nov 28 06:26:18 crc kubenswrapper[4955]: E1128 06:26:18.110268 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a1023e-2f70-4592-b507-8a198260ed35" containerName="route-controller-manager" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.110295 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a1023e-2f70-4592-b507-8a198260ed35" containerName="route-controller-manager" Nov 28 06:26:18 crc kubenswrapper[4955]: E1128 06:26:18.110309 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" containerName="installer" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.110318 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" containerName="installer" Nov 28 06:26:18 crc kubenswrapper[4955]: E1128 06:26:18.110335 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a49aa7e-6973-4a7b-9b1d-71922376ee73" containerName="controller-manager" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.110345 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a49aa7e-6973-4a7b-9b1d-71922376ee73" containerName="controller-manager" Nov 28 06:26:18 crc kubenswrapper[4955]: E1128 06:26:18.110359 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.110367 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.110525 4955 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="76e628a5-2b40-40b9-a4d1-29ff689b1096" containerName="installer" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.110541 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a1023e-2f70-4592-b507-8a198260ed35" containerName="route-controller-manager" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.110556 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a49aa7e-6973-4a7b-9b1d-71922376ee73" containerName="controller-manager" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.110577 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.111165 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.113701 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.114362 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.114548 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.114891 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.116979 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.117166 4955 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.120680 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7b695b4b44-tw7f6"] Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.121793 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.130313 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.130698 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.130922 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.131241 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.131698 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.131836 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.132856 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq"] Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.142756 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 06:26:18 crc kubenswrapper[4955]: 
I1128 06:26:18.145373 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b695b4b44-tw7f6"] Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.266996 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwdp\" (UniqueName: \"kubernetes.io/projected/2aeb2d23-5b08-4d78-9a0d-07ed11951767-kube-api-access-nmwdp\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.267064 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-config\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.267426 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4flc\" (UniqueName: \"kubernetes.io/projected/63ee0fd8-cf89-42c9-9278-2e5444abad06-kube-api-access-n4flc\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.267469 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-client-ca\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc 
kubenswrapper[4955]: I1128 06:26:18.267513 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aeb2d23-5b08-4d78-9a0d-07ed11951767-serving-cert\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.267594 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-proxy-ca-bundles\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.267660 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-config\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.267705 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-client-ca\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.267741 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee0fd8-cf89-42c9-9278-2e5444abad06-serving-cert\") pod 
\"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368334 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4flc\" (UniqueName: \"kubernetes.io/projected/63ee0fd8-cf89-42c9-9278-2e5444abad06-kube-api-access-n4flc\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368661 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-client-ca\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368684 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aeb2d23-5b08-4d78-9a0d-07ed11951767-serving-cert\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368708 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-proxy-ca-bundles\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368727 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-config\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368751 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-client-ca\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368768 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee0fd8-cf89-42c9-9278-2e5444abad06-serving-cert\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368805 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmwdp\" (UniqueName: \"kubernetes.io/projected/2aeb2d23-5b08-4d78-9a0d-07ed11951767-kube-api-access-nmwdp\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.368822 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-config\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " 
pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.369825 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-client-ca\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.370288 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-config\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.370710 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-client-ca\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.370981 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-proxy-ca-bundles\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.371260 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-config\") pod 
\"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.373171 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aeb2d23-5b08-4d78-9a0d-07ed11951767-serving-cert\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.375090 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee0fd8-cf89-42c9-9278-2e5444abad06-serving-cert\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.384851 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmwdp\" (UniqueName: \"kubernetes.io/projected/2aeb2d23-5b08-4d78-9a0d-07ed11951767-kube-api-access-nmwdp\") pod \"controller-manager-7b695b4b44-tw7f6\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.388234 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4flc\" (UniqueName: \"kubernetes.io/projected/63ee0fd8-cf89-42c9-9278-2e5444abad06-kube-api-access-n4flc\") pod \"route-controller-manager-7cdb6944bd-m67rq\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.439179 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.456350 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.701013 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7b695b4b44-tw7f6"] Nov 28 06:26:18 crc kubenswrapper[4955]: W1128 06:26:18.710496 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aeb2d23_5b08_4d78_9a0d_07ed11951767.slice/crio-8caaa5091f0beb2585271603b7dfd6daf2b3e48b266979112464ad53e1d1217b WatchSource:0}: Error finding container 8caaa5091f0beb2585271603b7dfd6daf2b3e48b266979112464ad53e1d1217b: Status 404 returned error can't find the container with id 8caaa5091f0beb2585271603b7dfd6daf2b3e48b266979112464ad53e1d1217b Nov 28 06:26:18 crc kubenswrapper[4955]: I1128 06:26:18.851199 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq"] Nov 28 06:26:18 crc kubenswrapper[4955]: W1128 06:26:18.860148 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ee0fd8_cf89_42c9_9278_2e5444abad06.slice/crio-e4084f489a11d649adf35fe704b2f4ef76416f90e2ecf86c09fc27bf8b43f818 WatchSource:0}: Error finding container e4084f489a11d649adf35fe704b2f4ef76416f90e2ecf86c09fc27bf8b43f818: Status 404 returned error can't find the container with id e4084f489a11d649adf35fe704b2f4ef76416f90e2ecf86c09fc27bf8b43f818 Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.278924 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" 
event={"ID":"63ee0fd8-cf89-42c9-9278-2e5444abad06","Type":"ContainerStarted","Data":"2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab"} Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.279219 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.279233 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" event={"ID":"63ee0fd8-cf89-42c9-9278-2e5444abad06","Type":"ContainerStarted","Data":"e4084f489a11d649adf35fe704b2f4ef76416f90e2ecf86c09fc27bf8b43f818"} Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.281450 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" event={"ID":"2aeb2d23-5b08-4d78-9a0d-07ed11951767","Type":"ContainerStarted","Data":"8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e"} Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.281537 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" event={"ID":"2aeb2d23-5b08-4d78-9a0d-07ed11951767","Type":"ContainerStarted","Data":"8caaa5091f0beb2585271603b7dfd6daf2b3e48b266979112464ad53e1d1217b"} Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.282646 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.288056 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.300393 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" podStartSLOduration=3.300374248 podStartE2EDuration="3.300374248s" podCreationTimestamp="2025-11-28 06:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:26:19.298325578 +0000 UTC m=+301.887581178" watchObservedRunningTime="2025-11-28 06:26:19.300374248 +0000 UTC m=+301.889629828" Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.317762 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" podStartSLOduration=3.317743649 podStartE2EDuration="3.317743649s" podCreationTimestamp="2025-11-28 06:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:26:19.314538365 +0000 UTC m=+301.903793945" watchObservedRunningTime="2025-11-28 06:26:19.317743649 +0000 UTC m=+301.906999209" Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.390336 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.569480 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b695b4b44-tw7f6"] Nov 28 06:26:19 crc kubenswrapper[4955]: I1128 06:26:19.581563 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq"] Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.292331 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" podUID="63ee0fd8-cf89-42c9-9278-2e5444abad06" containerName="route-controller-manager" 
containerID="cri-o://2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab" gracePeriod=30 Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.293117 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" podUID="2aeb2d23-5b08-4d78-9a0d-07ed11951767" containerName="controller-manager" containerID="cri-o://8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e" gracePeriod=30 Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.712660 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.721056 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.745663 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64ffd7f445-tmklm"] Nov 28 06:26:21 crc kubenswrapper[4955]: E1128 06:26:21.745889 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aeb2d23-5b08-4d78-9a0d-07ed11951767" containerName="controller-manager" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.745909 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aeb2d23-5b08-4d78-9a0d-07ed11951767" containerName="controller-manager" Nov 28 06:26:21 crc kubenswrapper[4955]: E1128 06:26:21.745923 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ee0fd8-cf89-42c9-9278-2e5444abad06" containerName="route-controller-manager" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.745931 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ee0fd8-cf89-42c9-9278-2e5444abad06" containerName="route-controller-manager" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 
06:26:21.746031 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ee0fd8-cf89-42c9-9278-2e5444abad06" containerName="route-controller-manager" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.746049 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aeb2d23-5b08-4d78-9a0d-07ed11951767" containerName="controller-manager" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.746422 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.755449 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64ffd7f445-tmklm"] Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908001 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aeb2d23-5b08-4d78-9a0d-07ed11951767-serving-cert\") pod \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908058 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee0fd8-cf89-42c9-9278-2e5444abad06-serving-cert\") pod \"63ee0fd8-cf89-42c9-9278-2e5444abad06\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908091 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-client-ca\") pod \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908117 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-client-ca\") pod \"63ee0fd8-cf89-42c9-9278-2e5444abad06\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908141 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-config\") pod \"63ee0fd8-cf89-42c9-9278-2e5444abad06\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908200 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-config\") pod \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908265 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmwdp\" (UniqueName: \"kubernetes.io/projected/2aeb2d23-5b08-4d78-9a0d-07ed11951767-kube-api-access-nmwdp\") pod \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908303 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4flc\" (UniqueName: \"kubernetes.io/projected/63ee0fd8-cf89-42c9-9278-2e5444abad06-kube-api-access-n4flc\") pod \"63ee0fd8-cf89-42c9-9278-2e5444abad06\" (UID: \"63ee0fd8-cf89-42c9-9278-2e5444abad06\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908326 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-proxy-ca-bundles\") pod \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\" (UID: \"2aeb2d23-5b08-4d78-9a0d-07ed11951767\") " Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 
06:26:21.908468 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-config\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908559 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-client-ca\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908582 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tccp4\" (UniqueName: \"kubernetes.io/projected/2a6615ca-f632-40a8-aaec-e2365c59bdea-kube-api-access-tccp4\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908608 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-proxy-ca-bundles\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.908639 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a6615ca-f632-40a8-aaec-e2365c59bdea-serving-cert\") pod 
\"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.909382 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-config" (OuterVolumeSpecName: "config") pod "63ee0fd8-cf89-42c9-9278-2e5444abad06" (UID: "63ee0fd8-cf89-42c9-9278-2e5444abad06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.909445 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-client-ca" (OuterVolumeSpecName: "client-ca") pod "63ee0fd8-cf89-42c9-9278-2e5444abad06" (UID: "63ee0fd8-cf89-42c9-9278-2e5444abad06"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.909519 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-client-ca" (OuterVolumeSpecName: "client-ca") pod "2aeb2d23-5b08-4d78-9a0d-07ed11951767" (UID: "2aeb2d23-5b08-4d78-9a0d-07ed11951767"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.909628 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2aeb2d23-5b08-4d78-9a0d-07ed11951767" (UID: "2aeb2d23-5b08-4d78-9a0d-07ed11951767"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.909748 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-config" (OuterVolumeSpecName: "config") pod "2aeb2d23-5b08-4d78-9a0d-07ed11951767" (UID: "2aeb2d23-5b08-4d78-9a0d-07ed11951767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.913413 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ee0fd8-cf89-42c9-9278-2e5444abad06-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63ee0fd8-cf89-42c9-9278-2e5444abad06" (UID: "63ee0fd8-cf89-42c9-9278-2e5444abad06"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.913457 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ee0fd8-cf89-42c9-9278-2e5444abad06-kube-api-access-n4flc" (OuterVolumeSpecName: "kube-api-access-n4flc") pod "63ee0fd8-cf89-42c9-9278-2e5444abad06" (UID: "63ee0fd8-cf89-42c9-9278-2e5444abad06"). InnerVolumeSpecName "kube-api-access-n4flc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.913639 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aeb2d23-5b08-4d78-9a0d-07ed11951767-kube-api-access-nmwdp" (OuterVolumeSpecName: "kube-api-access-nmwdp") pod "2aeb2d23-5b08-4d78-9a0d-07ed11951767" (UID: "2aeb2d23-5b08-4d78-9a0d-07ed11951767"). InnerVolumeSpecName "kube-api-access-nmwdp". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:26:21 crc kubenswrapper[4955]: I1128 06:26:21.914114 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aeb2d23-5b08-4d78-9a0d-07ed11951767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2aeb2d23-5b08-4d78-9a0d-07ed11951767" (UID: "2aeb2d23-5b08-4d78-9a0d-07ed11951767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009683 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-config\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009747 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-client-ca\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009773 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tccp4\" (UniqueName: \"kubernetes.io/projected/2a6615ca-f632-40a8-aaec-e2365c59bdea-kube-api-access-tccp4\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009799 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-proxy-ca-bundles\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009825 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a6615ca-f632-40a8-aaec-e2365c59bdea-serving-cert\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009861 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmwdp\" (UniqueName: \"kubernetes.io/projected/2aeb2d23-5b08-4d78-9a0d-07ed11951767-kube-api-access-nmwdp\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009873 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4flc\" (UniqueName: \"kubernetes.io/projected/63ee0fd8-cf89-42c9-9278-2e5444abad06-kube-api-access-n4flc\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009882 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009891 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2aeb2d23-5b08-4d78-9a0d-07ed11951767-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009898 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63ee0fd8-cf89-42c9-9278-2e5444abad06-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009906 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009915 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009922 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee0fd8-cf89-42c9-9278-2e5444abad06-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.009930 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2aeb2d23-5b08-4d78-9a0d-07ed11951767-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.011224 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-config\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.011292 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-client-ca\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.011873 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-proxy-ca-bundles\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.019285 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a6615ca-f632-40a8-aaec-e2365c59bdea-serving-cert\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.038269 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tccp4\" (UniqueName: \"kubernetes.io/projected/2a6615ca-f632-40a8-aaec-e2365c59bdea-kube-api-access-tccp4\") pod \"controller-manager-64ffd7f445-tmklm\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") " pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.079789 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.287752 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64ffd7f445-tmklm"]
Nov 28 06:26:22 crc kubenswrapper[4955]: W1128 06:26:22.292533 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a6615ca_f632_40a8_aaec_e2365c59bdea.slice/crio-79ce2e6f643558a20382731fd4230e4b309542b0d2a481ed05ae2e99dd45dd2f WatchSource:0}: Error finding container 79ce2e6f643558a20382731fd4230e4b309542b0d2a481ed05ae2e99dd45dd2f: Status 404 returned error can't find the container with id 79ce2e6f643558a20382731fd4230e4b309542b0d2a481ed05ae2e99dd45dd2f
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.297776 4955 generic.go:334] "Generic (PLEG): container finished" podID="2aeb2d23-5b08-4d78-9a0d-07ed11951767" containerID="8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e" exitCode=0
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.297824 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.297837 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" event={"ID":"2aeb2d23-5b08-4d78-9a0d-07ed11951767","Type":"ContainerDied","Data":"8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e"}
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.297859 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7b695b4b44-tw7f6" event={"ID":"2aeb2d23-5b08-4d78-9a0d-07ed11951767","Type":"ContainerDied","Data":"8caaa5091f0beb2585271603b7dfd6daf2b3e48b266979112464ad53e1d1217b"}
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.297875 4955 scope.go:117] "RemoveContainer" containerID="8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.299682 4955 generic.go:334] "Generic (PLEG): container finished" podID="63ee0fd8-cf89-42c9-9278-2e5444abad06" containerID="2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab" exitCode=0
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.299706 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" event={"ID":"63ee0fd8-cf89-42c9-9278-2e5444abad06","Type":"ContainerDied","Data":"2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab"}
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.299722 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq" event={"ID":"63ee0fd8-cf89-42c9-9278-2e5444abad06","Type":"ContainerDied","Data":"e4084f489a11d649adf35fe704b2f4ef76416f90e2ecf86c09fc27bf8b43f818"}
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.299772 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.321883 4955 scope.go:117] "RemoveContainer" containerID="8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e"
Nov 28 06:26:22 crc kubenswrapper[4955]: E1128 06:26:22.323411 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e\": container with ID starting with 8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e not found: ID does not exist" containerID="8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.323441 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e"} err="failed to get container status \"8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e\": rpc error: code = NotFound desc = could not find container \"8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e\": container with ID starting with 8603f5a0d04fec41def4fd7e4b40e8d177aacfaf8a5e7c3fd321bb89bb03297e not found: ID does not exist"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.323459 4955 scope.go:117] "RemoveContainer" containerID="2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.328887 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq"]
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.335482 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdb6944bd-m67rq"]
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.338948 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7b695b4b44-tw7f6"]
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.341816 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7b695b4b44-tw7f6"]
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.374102 4955 scope.go:117] "RemoveContainer" containerID="2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab"
Nov 28 06:26:22 crc kubenswrapper[4955]: E1128 06:26:22.374580 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab\": container with ID starting with 2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab not found: ID does not exist" containerID="2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab"
Nov 28 06:26:22 crc kubenswrapper[4955]: I1128 06:26:22.374629 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab"} err="failed to get container status \"2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab\": rpc error: code = NotFound desc = could not find container \"2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab\": container with ID starting with 2140026c3ada9e7682f8a203209b4990df11dad1c91481a81860af2d144ac4ab not found: ID does not exist"
Nov 28 06:26:23 crc kubenswrapper[4955]: I1128 06:26:23.312178 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" event={"ID":"2a6615ca-f632-40a8-aaec-e2365c59bdea","Type":"ContainerStarted","Data":"87823868b3f9a3bc7dea6892a4b06188505c6139f2200fb3f714371637e413ba"}
Nov 28 06:26:23 crc kubenswrapper[4955]: I1128 06:26:23.312692 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" event={"ID":"2a6615ca-f632-40a8-aaec-e2365c59bdea","Type":"ContainerStarted","Data":"79ce2e6f643558a20382731fd4230e4b309542b0d2a481ed05ae2e99dd45dd2f"}
Nov 28 06:26:23 crc kubenswrapper[4955]: I1128 06:26:23.314550 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:23 crc kubenswrapper[4955]: I1128 06:26:23.320068 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:23 crc kubenswrapper[4955]: I1128 06:26:23.361877 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" podStartSLOduration=3.36185418 podStartE2EDuration="3.36185418s" podCreationTimestamp="2025-11-28 06:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:26:23.340647997 +0000 UTC m=+305.929903587" watchObservedRunningTime="2025-11-28 06:26:23.36185418 +0000 UTC m=+305.951109760"
Nov 28 06:26:23 crc kubenswrapper[4955]: I1128 06:26:23.717865 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aeb2d23-5b08-4d78-9a0d-07ed11951767" path="/var/lib/kubelet/pods/2aeb2d23-5b08-4d78-9a0d-07ed11951767/volumes"
Nov 28 06:26:23 crc kubenswrapper[4955]: I1128 06:26:23.718608 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ee0fd8-cf89-42c9-9278-2e5444abad06" path="/var/lib/kubelet/pods/63ee0fd8-cf89-42c9-9278-2e5444abad06/volumes"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.112421 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"]
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.113194 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.117348 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.117481 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.117490 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.118900 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.119678 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"]
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.125396 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.125847 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.140261 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-client-ca\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.140392 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-config\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.140456 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwwk\" (UniqueName: \"kubernetes.io/projected/b613a232-cf97-4b35-9772-c2087bec6c28-kube-api-access-7gwwk\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.140562 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b613a232-cf97-4b35-9772-c2087bec6c28-serving-cert\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.241865 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gwwk\" (UniqueName: \"kubernetes.io/projected/b613a232-cf97-4b35-9772-c2087bec6c28-kube-api-access-7gwwk\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.242381 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b613a232-cf97-4b35-9772-c2087bec6c28-serving-cert\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.242553 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-client-ca\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.242655 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-config\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.243714 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-client-ca\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.244325 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-config\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.249895 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b613a232-cf97-4b35-9772-c2087bec6c28-serving-cert\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.262799 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gwwk\" (UniqueName: \"kubernetes.io/projected/b613a232-cf97-4b35-9772-c2087bec6c28-kube-api-access-7gwwk\") pod \"route-controller-manager-d9fbbd44f-4ww4p\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.473911 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:24 crc kubenswrapper[4955]: I1128 06:26:24.692577 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"]
Nov 28 06:26:25 crc kubenswrapper[4955]: I1128 06:26:25.330165 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p" event={"ID":"b613a232-cf97-4b35-9772-c2087bec6c28","Type":"ContainerStarted","Data":"8607595d4d9cc73e84bd43da7242ad970e409c792786daf6667381f82fb6e77c"}
Nov 28 06:26:25 crc kubenswrapper[4955]: I1128 06:26:25.330553 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p" event={"ID":"b613a232-cf97-4b35-9772-c2087bec6c28","Type":"ContainerStarted","Data":"0afd1d7cb6569b93ad5f7952d2e8b0faf3b0786602370e9db570889416da5934"}
Nov 28 06:26:25 crc kubenswrapper[4955]: I1128 06:26:25.351449 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p" podStartSLOduration=5.351430297 podStartE2EDuration="5.351430297s" podCreationTimestamp="2025-11-28 06:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:26:25.349344065 +0000 UTC m=+307.938599645" watchObservedRunningTime="2025-11-28 06:26:25.351430297 +0000 UTC m=+307.940685867"
Nov 28 06:26:26 crc kubenswrapper[4955]: I1128 06:26:26.336170 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:26 crc kubenswrapper[4955]: I1128 06:26:26.345189 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"
Nov 28 06:26:36 crc kubenswrapper[4955]: I1128 06:26:36.258722 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64ffd7f445-tmklm"]
Nov 28 06:26:36 crc kubenswrapper[4955]: I1128 06:26:36.259446 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" podUID="2a6615ca-f632-40a8-aaec-e2365c59bdea" containerName="controller-manager" containerID="cri-o://87823868b3f9a3bc7dea6892a4b06188505c6139f2200fb3f714371637e413ba" gracePeriod=30
Nov 28 06:26:36 crc kubenswrapper[4955]: I1128 06:26:36.389619 4955 generic.go:334] "Generic (PLEG): container finished" podID="2a6615ca-f632-40a8-aaec-e2365c59bdea" containerID="87823868b3f9a3bc7dea6892a4b06188505c6139f2200fb3f714371637e413ba" exitCode=0
Nov 28 06:26:36 crc kubenswrapper[4955]: I1128 06:26:36.389674 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" event={"ID":"2a6615ca-f632-40a8-aaec-e2365c59bdea","Type":"ContainerDied","Data":"87823868b3f9a3bc7dea6892a4b06188505c6139f2200fb3f714371637e413ba"}
Nov 28 06:26:36 crc kubenswrapper[4955]: I1128 06:26:36.841972 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.004294 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a6615ca-f632-40a8-aaec-e2365c59bdea-serving-cert\") pod \"2a6615ca-f632-40a8-aaec-e2365c59bdea\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") "
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.004347 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tccp4\" (UniqueName: \"kubernetes.io/projected/2a6615ca-f632-40a8-aaec-e2365c59bdea-kube-api-access-tccp4\") pod \"2a6615ca-f632-40a8-aaec-e2365c59bdea\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") "
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.004386 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-client-ca\") pod \"2a6615ca-f632-40a8-aaec-e2365c59bdea\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") "
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.004445 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-config\") pod \"2a6615ca-f632-40a8-aaec-e2365c59bdea\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") "
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.004521 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-proxy-ca-bundles\") pod \"2a6615ca-f632-40a8-aaec-e2365c59bdea\" (UID: \"2a6615ca-f632-40a8-aaec-e2365c59bdea\") "
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.005062 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a6615ca-f632-40a8-aaec-e2365c59bdea" (UID: "2a6615ca-f632-40a8-aaec-e2365c59bdea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.005125 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-config" (OuterVolumeSpecName: "config") pod "2a6615ca-f632-40a8-aaec-e2365c59bdea" (UID: "2a6615ca-f632-40a8-aaec-e2365c59bdea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.005155 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2a6615ca-f632-40a8-aaec-e2365c59bdea" (UID: "2a6615ca-f632-40a8-aaec-e2365c59bdea"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.010138 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a6615ca-f632-40a8-aaec-e2365c59bdea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a6615ca-f632-40a8-aaec-e2365c59bdea" (UID: "2a6615ca-f632-40a8-aaec-e2365c59bdea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.010964 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a6615ca-f632-40a8-aaec-e2365c59bdea-kube-api-access-tccp4" (OuterVolumeSpecName: "kube-api-access-tccp4") pod "2a6615ca-f632-40a8-aaec-e2365c59bdea" (UID: "2a6615ca-f632-40a8-aaec-e2365c59bdea"). InnerVolumeSpecName "kube-api-access-tccp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.106331 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.106380 4955 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.106396 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a6615ca-f632-40a8-aaec-e2365c59bdea-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.106410 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tccp4\" (UniqueName: \"kubernetes.io/projected/2a6615ca-f632-40a8-aaec-e2365c59bdea-kube-api-access-tccp4\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.106423 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a6615ca-f632-40a8-aaec-e2365c59bdea-client-ca\") on node \"crc\" DevicePath \"\""
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.398278 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm" event={"ID":"2a6615ca-f632-40a8-aaec-e2365c59bdea","Type":"ContainerDied","Data":"79ce2e6f643558a20382731fd4230e4b309542b0d2a481ed05ae2e99dd45dd2f"}
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.398317 4955 scope.go:117] "RemoveContainer" containerID="87823868b3f9a3bc7dea6892a4b06188505c6139f2200fb3f714371637e413ba"
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.398432 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64ffd7f445-tmklm"
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.439704 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64ffd7f445-tmklm"]
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.443918 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64ffd7f445-tmklm"]
Nov 28 06:26:37 crc kubenswrapper[4955]: I1128 06:26:37.710674 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6615ca-f632-40a8-aaec-e2365c59bdea" path="/var/lib/kubelet/pods/2a6615ca-f632-40a8-aaec-e2365c59bdea/volumes"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.119716 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55f8659977-nshkk"]
Nov 28 06:26:38 crc kubenswrapper[4955]: E1128 06:26:38.120024 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a6615ca-f632-40a8-aaec-e2365c59bdea" containerName="controller-manager"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.120040 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a6615ca-f632-40a8-aaec-e2365c59bdea" containerName="controller-manager"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.120202 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a6615ca-f632-40a8-aaec-e2365c59bdea" containerName="controller-manager"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.120651 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.129296 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.131613 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.131680 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.131914 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.132132 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.132432 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.148410 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.152485 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f8659977-nshkk"]
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.221667 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-serving-cert\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.221734 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-client-ca\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.221762 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfp5\" (UniqueName: \"kubernetes.io/projected/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-kube-api-access-kkfp5\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.221848 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-proxy-ca-bundles\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.221878 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-config\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.322499 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfp5\" (UniqueName: \"kubernetes.io/projected/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-kube-api-access-kkfp5\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.322611 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-proxy-ca-bundles\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.322651 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-config\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.322728 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-serving-cert\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk"
Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.322779 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-client-ca\") pod \"controller-manager-55f8659977-nshkk\" (UID:
\"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.323880 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-client-ca\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.324494 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-config\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.325806 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-proxy-ca-bundles\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.327555 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-serving-cert\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.341204 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfp5\" (UniqueName: 
\"kubernetes.io/projected/70d91cd4-5121-4cc5-b2c2-b722d1a75ed6-kube-api-access-kkfp5\") pod \"controller-manager-55f8659977-nshkk\" (UID: \"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6\") " pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.445633 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:38 crc kubenswrapper[4955]: I1128 06:26:38.862988 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f8659977-nshkk"] Nov 28 06:26:39 crc kubenswrapper[4955]: I1128 06:26:39.410743 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" event={"ID":"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6","Type":"ContainerStarted","Data":"40b63d66741381325e86c2d4857dfdeb9baa9e55de560766f3743ce237104b73"} Nov 28 06:26:39 crc kubenswrapper[4955]: I1128 06:26:39.410786 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" event={"ID":"70d91cd4-5121-4cc5-b2c2-b722d1a75ed6","Type":"ContainerStarted","Data":"c96fd12a225e65877a93231f4a15c8638e889cc1b0e4597b490f685c938e0d2b"} Nov 28 06:26:39 crc kubenswrapper[4955]: I1128 06:26:39.411037 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:39 crc kubenswrapper[4955]: I1128 06:26:39.426353 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" podStartSLOduration=3.426333925 podStartE2EDuration="3.426333925s" podCreationTimestamp="2025-11-28 06:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 06:26:39.422863623 +0000 UTC m=+322.012119203" watchObservedRunningTime="2025-11-28 06:26:39.426333925 +0000 UTC m=+322.015589495" Nov 28 06:26:39 crc kubenswrapper[4955]: I1128 06:26:39.428283 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55f8659977-nshkk" Nov 28 06:26:49 crc kubenswrapper[4955]: I1128 06:26:49.959438 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hrjcd"] Nov 28 06:26:49 crc kubenswrapper[4955]: I1128 06:26:49.960196 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hrjcd" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerName="registry-server" containerID="cri-o://d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1" gracePeriod=30 Nov 28 06:26:49 crc kubenswrapper[4955]: I1128 06:26:49.972002 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-svtkn"] Nov 28 06:26:49 crc kubenswrapper[4955]: I1128 06:26:49.972231 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-svtkn" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerName="registry-server" containerID="cri-o://b9f7861d69caae0cd19549bf8dcf796e5326ad61693db193aed61455ed379a47" gracePeriod=30 Nov 28 06:26:49 crc kubenswrapper[4955]: I1128 06:26:49.980011 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bf8xd"] Nov 28 06:26:49 crc kubenswrapper[4955]: I1128 06:26:49.986434 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" podUID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" containerName="marketplace-operator" 
containerID="cri-o://9010462ead1dbaeeaed59ce5e623b261b220200f40eb37aa54f85dd28b2ad2a0" gracePeriod=30 Nov 28 06:26:49 crc kubenswrapper[4955]: I1128 06:26:49.987563 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlp4t"] Nov 28 06:26:49 crc kubenswrapper[4955]: I1128 06:26:49.987770 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mlp4t" podUID="29481cf1-0690-4067-b85d-b753b59d584d" containerName="registry-server" containerID="cri-o://83487ab3468a2af9c2653ecedb977192561af661d9cc3a272749b5a910cb2f97" gracePeriod=30 Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.003945 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nsv62"] Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.004666 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.006776 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhnw8"] Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.006983 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jhnw8" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="registry-server" containerID="cri-o://d8a2b04653cc692c9d2010af704f26bd1830329dacce2f0614c994997660fb9f" gracePeriod=30 Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.029170 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nsv62"] Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.175396 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/91faab58-aa75-49f0-bf54-3de5fccd9ead-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.175829 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxn8\" (UniqueName: \"kubernetes.io/projected/91faab58-aa75-49f0-bf54-3de5fccd9ead-kube-api-access-vxxn8\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.175859 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91faab58-aa75-49f0-bf54-3de5fccd9ead-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.277325 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxn8\" (UniqueName: \"kubernetes.io/projected/91faab58-aa75-49f0-bf54-3de5fccd9ead-kube-api-access-vxxn8\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.277373 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91faab58-aa75-49f0-bf54-3de5fccd9ead-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.277413 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91faab58-aa75-49f0-bf54-3de5fccd9ead-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.279810 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91faab58-aa75-49f0-bf54-3de5fccd9ead-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.283640 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/91faab58-aa75-49f0-bf54-3de5fccd9ead-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.318058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxn8\" (UniqueName: \"kubernetes.io/projected/91faab58-aa75-49f0-bf54-3de5fccd9ead-kube-api-access-vxxn8\") pod \"marketplace-operator-79b997595-nsv62\" (UID: \"91faab58-aa75-49f0-bf54-3de5fccd9ead\") " pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.332657 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.475953 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.489861 4955 generic.go:334] "Generic (PLEG): container finished" podID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerID="b9f7861d69caae0cd19549bf8dcf796e5326ad61693db193aed61455ed379a47" exitCode=0 Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.489945 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svtkn" event={"ID":"bd3aeed8-258b-459f-bb90-be61ddf70b91","Type":"ContainerDied","Data":"b9f7861d69caae0cd19549bf8dcf796e5326ad61693db193aed61455ed379a47"} Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.493665 4955 generic.go:334] "Generic (PLEG): container finished" podID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" containerID="9010462ead1dbaeeaed59ce5e623b261b220200f40eb37aa54f85dd28b2ad2a0" exitCode=0 Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.493832 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" event={"ID":"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6","Type":"ContainerDied","Data":"9010462ead1dbaeeaed59ce5e623b261b220200f40eb37aa54f85dd28b2ad2a0"} Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.497240 4955 generic.go:334] "Generic (PLEG): container finished" podID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerID="d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1" exitCode=0 Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.497311 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrjcd" 
event={"ID":"f1a74a4b-b614-48f9-bc76-26f457ae5acd","Type":"ContainerDied","Data":"d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1"} Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.497342 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrjcd" event={"ID":"f1a74a4b-b614-48f9-bc76-26f457ae5acd","Type":"ContainerDied","Data":"ca437bc92177b48f72412aa5d707a90e5aa1cfe192b95b0da3b87843ba24dbd2"} Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.497358 4955 scope.go:117] "RemoveContainer" containerID="d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.497469 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrjcd" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.507005 4955 generic.go:334] "Generic (PLEG): container finished" podID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerID="d8a2b04653cc692c9d2010af704f26bd1830329dacce2f0614c994997660fb9f" exitCode=0 Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.507084 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhnw8" event={"ID":"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6","Type":"ContainerDied","Data":"d8a2b04653cc692c9d2010af704f26bd1830329dacce2f0614c994997660fb9f"} Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.509663 4955 generic.go:334] "Generic (PLEG): container finished" podID="29481cf1-0690-4067-b85d-b753b59d584d" containerID="83487ab3468a2af9c2653ecedb977192561af661d9cc3a272749b5a910cb2f97" exitCode=0 Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.509734 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlp4t" event={"ID":"29481cf1-0690-4067-b85d-b753b59d584d","Type":"ContainerDied","Data":"83487ab3468a2af9c2653ecedb977192561af661d9cc3a272749b5a910cb2f97"} 
Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.519106 4955 scope.go:117] "RemoveContainer" containerID="adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.535410 4955 scope.go:117] "RemoveContainer" containerID="0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.558318 4955 scope.go:117] "RemoveContainer" containerID="d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1" Nov 28 06:26:50 crc kubenswrapper[4955]: E1128 06:26:50.560947 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1\": container with ID starting with d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1 not found: ID does not exist" containerID="d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.560983 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1"} err="failed to get container status \"d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1\": rpc error: code = NotFound desc = could not find container \"d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1\": container with ID starting with d3fbf2f6fb063c8719337054b69d9a55b8f128f366fefd5cfdd560ddb90f58c1 not found: ID does not exist" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.561005 4955 scope.go:117] "RemoveContainer" containerID="adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343" Nov 28 06:26:50 crc kubenswrapper[4955]: E1128 06:26:50.561986 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343\": container with ID starting with adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343 not found: ID does not exist" containerID="adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.562032 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343"} err="failed to get container status \"adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343\": rpc error: code = NotFound desc = could not find container \"adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343\": container with ID starting with adad226cf562ec88f8edbe319ef70c59c419af99bc491a3b4e7a01eff14a3343 not found: ID does not exist" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.562063 4955 scope.go:117] "RemoveContainer" containerID="0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775" Nov 28 06:26:50 crc kubenswrapper[4955]: E1128 06:26:50.563004 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775\": container with ID starting with 0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775 not found: ID does not exist" containerID="0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.563076 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775"} err="failed to get container status \"0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775\": rpc error: code = NotFound desc = could not find container \"0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775\": container with ID 
starting with 0bc9e58e28bf441d3b7dcd53e3a3d4bf18a912a71323722333884d595bbfc775 not found: ID does not exist" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.580276 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-utilities\") pod \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.580419 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bz2l\" (UniqueName: \"kubernetes.io/projected/f1a74a4b-b614-48f9-bc76-26f457ae5acd-kube-api-access-8bz2l\") pod \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.580465 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-catalog-content\") pod \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\" (UID: \"f1a74a4b-b614-48f9-bc76-26f457ae5acd\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.581585 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-utilities" (OuterVolumeSpecName: "utilities") pod "f1a74a4b-b614-48f9-bc76-26f457ae5acd" (UID: "f1a74a4b-b614-48f9-bc76-26f457ae5acd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.587695 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a74a4b-b614-48f9-bc76-26f457ae5acd-kube-api-access-8bz2l" (OuterVolumeSpecName: "kube-api-access-8bz2l") pod "f1a74a4b-b614-48f9-bc76-26f457ae5acd" (UID: "f1a74a4b-b614-48f9-bc76-26f457ae5acd"). 
InnerVolumeSpecName "kube-api-access-8bz2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.634768 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1a74a4b-b614-48f9-bc76-26f457ae5acd" (UID: "f1a74a4b-b614-48f9-bc76-26f457ae5acd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.682196 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.682232 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a74a4b-b614-48f9-bc76-26f457ae5acd-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.682242 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bz2l\" (UniqueName: \"kubernetes.io/projected/f1a74a4b-b614-48f9-bc76-26f457ae5acd-kube-api-access-8bz2l\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.694086 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.718907 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.724622 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.729970 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.819913 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hrjcd"] Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.822540 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hrjcd"] Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884359 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5csd\" (UniqueName: \"kubernetes.io/projected/bd3aeed8-258b-459f-bb90-be61ddf70b91-kube-api-access-v5csd\") pod \"bd3aeed8-258b-459f-bb90-be61ddf70b91\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884424 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-utilities\") pod \"bd3aeed8-258b-459f-bb90-be61ddf70b91\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884454 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz6f4\" (UniqueName: \"kubernetes.io/projected/29481cf1-0690-4067-b85d-b753b59d584d-kube-api-access-bz6f4\") pod \"29481cf1-0690-4067-b85d-b753b59d584d\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884527 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-utilities\") pod 
\"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884586 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-utilities\") pod \"29481cf1-0690-4067-b85d-b753b59d584d\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884611 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-catalog-content\") pod \"29481cf1-0690-4067-b85d-b753b59d584d\" (UID: \"29481cf1-0690-4067-b85d-b753b59d584d\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884638 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-operator-metrics\") pod \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884671 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd42l\" (UniqueName: \"kubernetes.io/projected/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-kube-api-access-dd42l\") pod \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884698 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-trusted-ca\") pod \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\" (UID: \"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884727 4955 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn57d\" (UniqueName: \"kubernetes.io/projected/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-kube-api-access-jn57d\") pod \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.884768 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-catalog-content\") pod \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\" (UID: \"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.885784 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-utilities" (OuterVolumeSpecName: "utilities") pod "a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" (UID: "a4ff587c-9685-4dd4-9fb4-44f1f640b5c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.886056 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-utilities" (OuterVolumeSpecName: "utilities") pod "bd3aeed8-258b-459f-bb90-be61ddf70b91" (UID: "bd3aeed8-258b-459f-bb90-be61ddf70b91"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.886140 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-utilities" (OuterVolumeSpecName: "utilities") pod "29481cf1-0690-4067-b85d-b753b59d584d" (UID: "29481cf1-0690-4067-b85d-b753b59d584d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.886238 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" (UID: "44ffa22c-63e2-4eec-90df-aaad3c7cdbe6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.886480 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-catalog-content\") pod \"bd3aeed8-258b-459f-bb90-be61ddf70b91\" (UID: \"bd3aeed8-258b-459f-bb90-be61ddf70b91\") " Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.886761 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.887069 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.887087 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.887102 4955 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.887179 
4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3aeed8-258b-459f-bb90-be61ddf70b91-kube-api-access-v5csd" (OuterVolumeSpecName: "kube-api-access-v5csd") pod "bd3aeed8-258b-459f-bb90-be61ddf70b91" (UID: "bd3aeed8-258b-459f-bb90-be61ddf70b91"). InnerVolumeSpecName "kube-api-access-v5csd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.888202 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-kube-api-access-dd42l" (OuterVolumeSpecName: "kube-api-access-dd42l") pod "44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" (UID: "44ffa22c-63e2-4eec-90df-aaad3c7cdbe6"). InnerVolumeSpecName "kube-api-access-dd42l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.888960 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" (UID: "44ffa22c-63e2-4eec-90df-aaad3c7cdbe6"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.890618 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-kube-api-access-jn57d" (OuterVolumeSpecName: "kube-api-access-jn57d") pod "a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" (UID: "a4ff587c-9685-4dd4-9fb4-44f1f640b5c6"). InnerVolumeSpecName "kube-api-access-jn57d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.902236 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29481cf1-0690-4067-b85d-b753b59d584d-kube-api-access-bz6f4" (OuterVolumeSpecName: "kube-api-access-bz6f4") pod "29481cf1-0690-4067-b85d-b753b59d584d" (UID: "29481cf1-0690-4067-b85d-b753b59d584d"). InnerVolumeSpecName "kube-api-access-bz6f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.928086 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nsv62"] Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.936411 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29481cf1-0690-4067-b85d-b753b59d584d" (UID: "29481cf1-0690-4067-b85d-b753b59d584d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: W1128 06:26:50.938485 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91faab58_aa75_49f0_bf54_3de5fccd9ead.slice/crio-d4421bb0e00faf6d8ea180c1707144a1415c7fed7e1fa1d802ded02c7fb86a91 WatchSource:0}: Error finding container d4421bb0e00faf6d8ea180c1707144a1415c7fed7e1fa1d802ded02c7fb86a91: Status 404 returned error can't find the container with id d4421bb0e00faf6d8ea180c1707144a1415c7fed7e1fa1d802ded02c7fb86a91 Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.969366 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd3aeed8-258b-459f-bb90-be61ddf70b91" (UID: "bd3aeed8-258b-459f-bb90-be61ddf70b91"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.988688 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd3aeed8-258b-459f-bb90-be61ddf70b91-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.988725 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5csd\" (UniqueName: \"kubernetes.io/projected/bd3aeed8-258b-459f-bb90-be61ddf70b91-kube-api-access-v5csd\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.988918 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz6f4\" (UniqueName: \"kubernetes.io/projected/29481cf1-0690-4067-b85d-b753b59d584d-kube-api-access-bz6f4\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.988964 4955 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29481cf1-0690-4067-b85d-b753b59d584d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.988976 4955 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.988989 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd42l\" (UniqueName: \"kubernetes.io/projected/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6-kube-api-access-dd42l\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:50 crc kubenswrapper[4955]: I1128 06:26:50.989001 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn57d\" (UniqueName: \"kubernetes.io/projected/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-kube-api-access-jn57d\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.003546 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" (UID: "a4ff587c-9685-4dd4-9fb4-44f1f640b5c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.089754 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.517210 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" event={"ID":"91faab58-aa75-49f0-bf54-3de5fccd9ead","Type":"ContainerStarted","Data":"d75febad2421a19d9f885b757fddf2caa56405783de31a9f1c35ab7c94a05ac6"} Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.517251 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" event={"ID":"91faab58-aa75-49f0-bf54-3de5fccd9ead","Type":"ContainerStarted","Data":"d4421bb0e00faf6d8ea180c1707144a1415c7fed7e1fa1d802ded02c7fb86a91"} Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.517376 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.520078 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.521193 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jhnw8" event={"ID":"a4ff587c-9685-4dd4-9fb4-44f1f640b5c6","Type":"ContainerDied","Data":"22265d6e235f4df70bdb51ce611512ab4190dc0c2168ad96c10d0d0b41001d09"} Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.521261 4955 scope.go:117] "RemoveContainer" containerID="d8a2b04653cc692c9d2010af704f26bd1830329dacce2f0614c994997660fb9f" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.521272 4955 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jhnw8" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.523817 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mlp4t" event={"ID":"29481cf1-0690-4067-b85d-b753b59d584d","Type":"ContainerDied","Data":"c8cd692b52c66961566c4f968cdd19e680b60749510038c1d61068487d661b2a"} Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.523920 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mlp4t" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.526549 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-svtkn" event={"ID":"bd3aeed8-258b-459f-bb90-be61ddf70b91","Type":"ContainerDied","Data":"aef422fdc8a18432642626163142080756b283c16237ec381679b174a8ffc1e9"} Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.526592 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-svtkn" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.533285 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" event={"ID":"44ffa22c-63e2-4eec-90df-aaad3c7cdbe6","Type":"ContainerDied","Data":"3d5bd308295fbb5e0353961c4014c4b333cf5443a2e65f0e4f60451bfe2bb1f5"} Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.533386 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bf8xd" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.539003 4955 scope.go:117] "RemoveContainer" containerID="17ef45a1ad7eac34366d2f4a6aa9e5f8a8965ec71120f847bb4f25501d080400" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.538993 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nsv62" podStartSLOduration=2.538975039 podStartE2EDuration="2.538975039s" podCreationTimestamp="2025-11-28 06:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:26:51.53604277 +0000 UTC m=+334.125298360" watchObservedRunningTime="2025-11-28 06:26:51.538975039 +0000 UTC m=+334.128230609" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.620897 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-svtkn"] Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.627051 4955 scope.go:117] "RemoveContainer" containerID="834494de76a4715b1db3e2f182be5a35c97eab420e6a58c9384ae091af116ef5" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.630648 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-svtkn"] Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.636480 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bf8xd"] Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.648006 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bf8xd"] Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.650964 4955 scope.go:117] "RemoveContainer" containerID="83487ab3468a2af9c2653ecedb977192561af661d9cc3a272749b5a910cb2f97" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.653685 4955 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlp4t"] Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.656644 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mlp4t"] Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.662243 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jhnw8"] Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.665718 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jhnw8"] Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.668165 4955 scope.go:117] "RemoveContainer" containerID="64df6fff9348bf88e12210b17e569aa800fbda367e403c5a7101c1e2ead25cdf" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.679877 4955 scope.go:117] "RemoveContainer" containerID="711b223c15ecf8bcc915c1cd16b4a2e3eeda1dfe937f58c3eeadb8a2bc0cf8fa" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.695653 4955 scope.go:117] "RemoveContainer" containerID="b9f7861d69caae0cd19549bf8dcf796e5326ad61693db193aed61455ed379a47" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.710151 4955 scope.go:117] "RemoveContainer" containerID="36dd1337ce0773d11b4e602d163624af3a7802d0e809486a068c52e83874e8e3" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.711101 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29481cf1-0690-4067-b85d-b753b59d584d" path="/var/lib/kubelet/pods/29481cf1-0690-4067-b85d-b753b59d584d/volumes" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.711717 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" path="/var/lib/kubelet/pods/44ffa22c-63e2-4eec-90df-aaad3c7cdbe6/volumes" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.713080 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" path="/var/lib/kubelet/pods/a4ff587c-9685-4dd4-9fb4-44f1f640b5c6/volumes" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.714293 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" path="/var/lib/kubelet/pods/bd3aeed8-258b-459f-bb90-be61ddf70b91/volumes" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.714906 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" path="/var/lib/kubelet/pods/f1a74a4b-b614-48f9-bc76-26f457ae5acd/volumes" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.721939 4955 scope.go:117] "RemoveContainer" containerID="79b6ce7cd055af5ec7f80e5d4a86501dd25154519891304e65caedd9c956bf10" Nov 28 06:26:51 crc kubenswrapper[4955]: I1128 06:26:51.733886 4955 scope.go:117] "RemoveContainer" containerID="9010462ead1dbaeeaed59ce5e623b261b220200f40eb37aa54f85dd28b2ad2a0" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176273 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kbjwb"] Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176446 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176457 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176465 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176471 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 
06:26:52.176480 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerName="extract-utilities" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176486 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerName="extract-utilities" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176495 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerName="extract-content" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176554 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerName="extract-content" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176566 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" containerName="marketplace-operator" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176572 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" containerName="marketplace-operator" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176580 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29481cf1-0690-4067-b85d-b753b59d584d" containerName="extract-content" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176585 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="29481cf1-0690-4067-b85d-b753b59d584d" containerName="extract-content" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176594 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerName="extract-content" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176600 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerName="extract-content" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 
06:26:52.176609 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerName="extract-utilities" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176616 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerName="extract-utilities" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176623 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29481cf1-0690-4067-b85d-b753b59d584d" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176630 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="29481cf1-0690-4067-b85d-b753b59d584d" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176637 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29481cf1-0690-4067-b85d-b753b59d584d" containerName="extract-utilities" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176643 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="29481cf1-0690-4067-b85d-b753b59d584d" containerName="extract-utilities" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176652 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176658 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 06:26:52.176670 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="extract-content" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176676 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="extract-content" Nov 28 06:26:52 crc kubenswrapper[4955]: E1128 
06:26:52.176683 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="extract-utilities" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176689 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="extract-utilities" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176773 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="44ffa22c-63e2-4eec-90df-aaad3c7cdbe6" containerName="marketplace-operator" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176784 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ff587c-9685-4dd4-9fb4-44f1f640b5c6" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176791 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="29481cf1-0690-4067-b85d-b753b59d584d" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176802 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a74a4b-b614-48f9-bc76-26f457ae5acd" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.176809 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3aeed8-258b-459f-bb90-be61ddf70b91" containerName="registry-server" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.177420 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.180860 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.193001 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbjwb"] Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.319635 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lccv9\" (UniqueName: \"kubernetes.io/projected/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-kube-api-access-lccv9\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.319982 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-catalog-content\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.320016 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-utilities\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.384281 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5kxmn"] Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.385333 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.390833 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.395435 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kxmn"] Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.420849 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-catalog-content\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.420894 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-utilities\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.420934 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lccv9\" (UniqueName: \"kubernetes.io/projected/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-kube-api-access-lccv9\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.421569 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-catalog-content\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " 
pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.422725 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-utilities\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.441843 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lccv9\" (UniqueName: \"kubernetes.io/projected/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-kube-api-access-lccv9\") pod \"redhat-marketplace-kbjwb\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.507485 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.522171 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4041c69-e867-4601-977f-ffee8577f28c-utilities\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.522367 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsff5\" (UniqueName: \"kubernetes.io/projected/c4041c69-e867-4601-977f-ffee8577f28c-kube-api-access-rsff5\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.522497 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4041c69-e867-4601-977f-ffee8577f28c-catalog-content\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.623404 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4041c69-e867-4601-977f-ffee8577f28c-catalog-content\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.623495 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4041c69-e867-4601-977f-ffee8577f28c-utilities\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.623565 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsff5\" (UniqueName: \"kubernetes.io/projected/c4041c69-e867-4601-977f-ffee8577f28c-kube-api-access-rsff5\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.624643 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4041c69-e867-4601-977f-ffee8577f28c-utilities\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.624642 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c4041c69-e867-4601-977f-ffee8577f28c-catalog-content\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.647318 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsff5\" (UniqueName: \"kubernetes.io/projected/c4041c69-e867-4601-977f-ffee8577f28c-kube-api-access-rsff5\") pod \"redhat-operators-5kxmn\" (UID: \"c4041c69-e867-4601-977f-ffee8577f28c\") " pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.715583 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:26:52 crc kubenswrapper[4955]: I1128 06:26:52.893220 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbjwb"] Nov 28 06:26:52 crc kubenswrapper[4955]: W1128 06:26:52.894105 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ee1e24f_6b2f_4b13_b8a4_7f9a24de7e95.slice/crio-fec9ca396a7b90d85539800731ae28a40ce1b62bf22f09ed23caa1566ceeff2d WatchSource:0}: Error finding container fec9ca396a7b90d85539800731ae28a40ce1b62bf22f09ed23caa1566ceeff2d: Status 404 returned error can't find the container with id fec9ca396a7b90d85539800731ae28a40ce1b62bf22f09ed23caa1566ceeff2d Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.095498 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5kxmn"] Nov 28 06:26:53 crc kubenswrapper[4955]: W1128 06:26:53.101968 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4041c69_e867_4601_977f_ffee8577f28c.slice/crio-8804718bc011a937df50709e98011719f85b80c54acd1316c9970f6f7612f9af 
WatchSource:0}: Error finding container 8804718bc011a937df50709e98011719f85b80c54acd1316c9970f6f7612f9af: Status 404 returned error can't find the container with id 8804718bc011a937df50709e98011719f85b80c54acd1316c9970f6f7612f9af Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.393773 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.395250 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.552915 4955 generic.go:334] "Generic (PLEG): container finished" podID="c4041c69-e867-4601-977f-ffee8577f28c" containerID="1d2bc79bdc3db1d6db7a63f04bf66f0ecbf2742841347a122f0d0d36c5849f31" exitCode=0 Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.554290 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kxmn" event={"ID":"c4041c69-e867-4601-977f-ffee8577f28c","Type":"ContainerDied","Data":"1d2bc79bdc3db1d6db7a63f04bf66f0ecbf2742841347a122f0d0d36c5849f31"} Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.554348 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kxmn" event={"ID":"c4041c69-e867-4601-977f-ffee8577f28c","Type":"ContainerStarted","Data":"8804718bc011a937df50709e98011719f85b80c54acd1316c9970f6f7612f9af"} Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.561207 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerID="aa1a56754d23dc0389691b0125402cc5af62d8630a9a6934d6d5bd1cf08694f3" exitCode=0 Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.562127 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbjwb" event={"ID":"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95","Type":"ContainerDied","Data":"aa1a56754d23dc0389691b0125402cc5af62d8630a9a6934d6d5bd1cf08694f3"} Nov 28 06:26:53 crc kubenswrapper[4955]: I1128 06:26:53.562202 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbjwb" event={"ID":"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95","Type":"ContainerStarted","Data":"fec9ca396a7b90d85539800731ae28a40ce1b62bf22f09ed23caa1566ceeff2d"} Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.570591 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbjwb" event={"ID":"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95","Type":"ContainerStarted","Data":"532711fe15c4265bf78f1d7b5ea5fa2eff9ef76aa2e009826cdee9890e61010d"} Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.573380 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kxmn" event={"ID":"c4041c69-e867-4601-977f-ffee8577f28c","Type":"ContainerStarted","Data":"d2a3633e479949fb8bddcadecaa541133307e55ff159f3738db576b58c4650f9"} Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.575395 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tqdwm"] Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.576339 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.590084 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.598655 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqdwm"] Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.648990 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33445dd-1d02-47a1-bb19-42033b44eaa4-catalog-content\") pod \"community-operators-tqdwm\" (UID: \"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.649052 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33445dd-1d02-47a1-bb19-42033b44eaa4-utilities\") pod \"community-operators-tqdwm\" (UID: \"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.649094 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6c7q\" (UniqueName: \"kubernetes.io/projected/e33445dd-1d02-47a1-bb19-42033b44eaa4-kube-api-access-v6c7q\") pod \"community-operators-tqdwm\" (UID: \"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.750762 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33445dd-1d02-47a1-bb19-42033b44eaa4-catalog-content\") pod \"community-operators-tqdwm\" (UID: 
\"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.750843 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33445dd-1d02-47a1-bb19-42033b44eaa4-utilities\") pod \"community-operators-tqdwm\" (UID: \"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.750901 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6c7q\" (UniqueName: \"kubernetes.io/projected/e33445dd-1d02-47a1-bb19-42033b44eaa4-kube-api-access-v6c7q\") pod \"community-operators-tqdwm\" (UID: \"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.752066 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e33445dd-1d02-47a1-bb19-42033b44eaa4-catalog-content\") pod \"community-operators-tqdwm\" (UID: \"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.752287 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e33445dd-1d02-47a1-bb19-42033b44eaa4-utilities\") pod \"community-operators-tqdwm\" (UID: \"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.771947 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4q2nl"] Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.772972 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.774945 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.782956 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q2nl"] Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.807400 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6c7q\" (UniqueName: \"kubernetes.io/projected/e33445dd-1d02-47a1-bb19-42033b44eaa4-kube-api-access-v6c7q\") pod \"community-operators-tqdwm\" (UID: \"e33445dd-1d02-47a1-bb19-42033b44eaa4\") " pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.910578 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.953286 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs9xn\" (UniqueName: \"kubernetes.io/projected/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-kube-api-access-xs9xn\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.953439 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-catalog-content\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:54 crc kubenswrapper[4955]: I1128 06:26:54.953488 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-utilities\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.054547 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-catalog-content\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.054954 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-utilities\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.055011 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs9xn\" (UniqueName: \"kubernetes.io/projected/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-kube-api-access-xs9xn\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.055815 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-utilities\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.056073 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-catalog-content\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.073001 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs9xn\" (UniqueName: \"kubernetes.io/projected/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-kube-api-access-xs9xn\") pod \"certified-operators-4q2nl\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.136742 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.314777 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tqdwm"] Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.524763 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4q2nl"] Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.581058 4955 generic.go:334] "Generic (PLEG): container finished" podID="c4041c69-e867-4601-977f-ffee8577f28c" containerID="d2a3633e479949fb8bddcadecaa541133307e55ff159f3738db576b58c4650f9" exitCode=0 Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.581131 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kxmn" event={"ID":"c4041c69-e867-4601-977f-ffee8577f28c","Type":"ContainerDied","Data":"d2a3633e479949fb8bddcadecaa541133307e55ff159f3738db576b58c4650f9"} Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.587895 4955 generic.go:334] "Generic (PLEG): container finished" podID="e33445dd-1d02-47a1-bb19-42033b44eaa4" 
containerID="f52dab0fc3c40d0f3ed91ea8e034655864c7c720442fe322acaa1b80db08482a" exitCode=0 Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.587965 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdwm" event={"ID":"e33445dd-1d02-47a1-bb19-42033b44eaa4","Type":"ContainerDied","Data":"f52dab0fc3c40d0f3ed91ea8e034655864c7c720442fe322acaa1b80db08482a"} Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.587990 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdwm" event={"ID":"e33445dd-1d02-47a1-bb19-42033b44eaa4","Type":"ContainerStarted","Data":"a9f22eb064e7d9c24b25208e9ef220c19506aa0b75e356eec9653c33be338133"} Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.592876 4955 generic.go:334] "Generic (PLEG): container finished" podID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerID="532711fe15c4265bf78f1d7b5ea5fa2eff9ef76aa2e009826cdee9890e61010d" exitCode=0 Nov 28 06:26:55 crc kubenswrapper[4955]: I1128 06:26:55.592918 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbjwb" event={"ID":"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95","Type":"ContainerDied","Data":"532711fe15c4265bf78f1d7b5ea5fa2eff9ef76aa2e009826cdee9890e61010d"} Nov 28 06:26:55 crc kubenswrapper[4955]: W1128 06:26:55.605103 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d93d94f_56a0_44c9_9c1c_7d91f7b9d9ae.slice/crio-59ebf05076bb37dcab9935f6eaea7d41b5cda2b5a1aa9d67ee73f9b551ca7448 WatchSource:0}: Error finding container 59ebf05076bb37dcab9935f6eaea7d41b5cda2b5a1aa9d67ee73f9b551ca7448: Status 404 returned error can't find the container with id 59ebf05076bb37dcab9935f6eaea7d41b5cda2b5a1aa9d67ee73f9b551ca7448 Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.258595 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"] Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.259269 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p" podUID="b613a232-cf97-4b35-9772-c2087bec6c28" containerName="route-controller-manager" containerID="cri-o://8607595d4d9cc73e84bd43da7242ad970e409c792786daf6667381f82fb6e77c" gracePeriod=30 Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.600178 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbjwb" event={"ID":"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95","Type":"ContainerStarted","Data":"8a588daa6fb2aec840fdc404fd0ee7b7e2058d4fc258e37cd5858cad283865fc"} Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.603314 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5kxmn" event={"ID":"c4041c69-e867-4601-977f-ffee8577f28c","Type":"ContainerStarted","Data":"bbdc53065ec726aa4556546e32dbd03a0606958f92649d8cde5a6df50013197e"} Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.605475 4955 generic.go:334] "Generic (PLEG): container finished" podID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerID="16d22873994a205943f59c9632900ac1d856cc6d13ec0944c4117fb443573dee" exitCode=0 Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.605582 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q2nl" event={"ID":"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae","Type":"ContainerDied","Data":"16d22873994a205943f59c9632900ac1d856cc6d13ec0944c4117fb443573dee"} Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.605607 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q2nl" 
event={"ID":"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae","Type":"ContainerStarted","Data":"59ebf05076bb37dcab9935f6eaea7d41b5cda2b5a1aa9d67ee73f9b551ca7448"} Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.607998 4955 generic.go:334] "Generic (PLEG): container finished" podID="b613a232-cf97-4b35-9772-c2087bec6c28" containerID="8607595d4d9cc73e84bd43da7242ad970e409c792786daf6667381f82fb6e77c" exitCode=0 Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.608081 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p" event={"ID":"b613a232-cf97-4b35-9772-c2087bec6c28","Type":"ContainerDied","Data":"8607595d4d9cc73e84bd43da7242ad970e409c792786daf6667381f82fb6e77c"} Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.610423 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdwm" event={"ID":"e33445dd-1d02-47a1-bb19-42033b44eaa4","Type":"ContainerStarted","Data":"b690e378041b27bf5be7bd641e2c8c46275bc1085b0b9e192c1a7adc80b9b840"} Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.624011 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kbjwb" podStartSLOduration=2.00023389 podStartE2EDuration="4.62399409s" podCreationTimestamp="2025-11-28 06:26:52 +0000 UTC" firstStartedPulling="2025-11-28 06:26:53.563295188 +0000 UTC m=+336.152550758" lastFinishedPulling="2025-11-28 06:26:56.187055378 +0000 UTC m=+338.776310958" observedRunningTime="2025-11-28 06:26:56.622996513 +0000 UTC m=+339.212252103" watchObservedRunningTime="2025-11-28 06:26:56.62399409 +0000 UTC m=+339.213249660" Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.656287 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5kxmn" podStartSLOduration=2.229833375 podStartE2EDuration="4.65626616s" podCreationTimestamp="2025-11-28 
06:26:52 +0000 UTC" firstStartedPulling="2025-11-28 06:26:53.555103733 +0000 UTC m=+336.144359353" lastFinishedPulling="2025-11-28 06:26:55.981536568 +0000 UTC m=+338.570792138" observedRunningTime="2025-11-28 06:26:56.65278444 +0000 UTC m=+339.242040040" watchObservedRunningTime="2025-11-28 06:26:56.65626616 +0000 UTC m=+339.245521730" Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.842075 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p" Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.977603 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b613a232-cf97-4b35-9772-c2087bec6c28-serving-cert\") pod \"b613a232-cf97-4b35-9772-c2087bec6c28\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.977668 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gwwk\" (UniqueName: \"kubernetes.io/projected/b613a232-cf97-4b35-9772-c2087bec6c28-kube-api-access-7gwwk\") pod \"b613a232-cf97-4b35-9772-c2087bec6c28\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.977768 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-client-ca\") pod \"b613a232-cf97-4b35-9772-c2087bec6c28\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.977793 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-config\") pod \"b613a232-cf97-4b35-9772-c2087bec6c28\" (UID: \"b613a232-cf97-4b35-9772-c2087bec6c28\") " Nov 28 06:26:56 crc 
kubenswrapper[4955]: I1128 06:26:56.978712 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-config" (OuterVolumeSpecName: "config") pod "b613a232-cf97-4b35-9772-c2087bec6c28" (UID: "b613a232-cf97-4b35-9772-c2087bec6c28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.979058 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-client-ca" (OuterVolumeSpecName: "client-ca") pod "b613a232-cf97-4b35-9772-c2087bec6c28" (UID: "b613a232-cf97-4b35-9772-c2087bec6c28"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.983642 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b613a232-cf97-4b35-9772-c2087bec6c28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b613a232-cf97-4b35-9772-c2087bec6c28" (UID: "b613a232-cf97-4b35-9772-c2087bec6c28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:26:56 crc kubenswrapper[4955]: I1128 06:26:56.983897 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b613a232-cf97-4b35-9772-c2087bec6c28-kube-api-access-7gwwk" (OuterVolumeSpecName: "kube-api-access-7gwwk") pod "b613a232-cf97-4b35-9772-c2087bec6c28" (UID: "b613a232-cf97-4b35-9772-c2087bec6c28"). InnerVolumeSpecName "kube-api-access-7gwwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.078880 4955 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b613a232-cf97-4b35-9772-c2087bec6c28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.078923 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gwwk\" (UniqueName: \"kubernetes.io/projected/b613a232-cf97-4b35-9772-c2087bec6c28-kube-api-access-7gwwk\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.078938 4955 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.078950 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b613a232-cf97-4b35-9772-c2087bec6c28-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.616883 4955 generic.go:334] "Generic (PLEG): container finished" podID="e33445dd-1d02-47a1-bb19-42033b44eaa4" containerID="b690e378041b27bf5be7bd641e2c8c46275bc1085b0b9e192c1a7adc80b9b840" exitCode=0 Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.616972 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdwm" event={"ID":"e33445dd-1d02-47a1-bb19-42033b44eaa4","Type":"ContainerDied","Data":"b690e378041b27bf5be7bd641e2c8c46275bc1085b0b9e192c1a7adc80b9b840"} Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.620875 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q2nl" 
event={"ID":"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae","Type":"ContainerStarted","Data":"a8ac454de89fb8d1c432484bfe08b77118d30aa827c1b2223f3fefaa542d9a94"} Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.623139 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p" event={"ID":"b613a232-cf97-4b35-9772-c2087bec6c28","Type":"ContainerDied","Data":"0afd1d7cb6569b93ad5f7952d2e8b0faf3b0786602370e9db570889416da5934"} Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.623253 4955 scope.go:117] "RemoveContainer" containerID="8607595d4d9cc73e84bd43da7242ad970e409c792786daf6667381f82fb6e77c" Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.623417 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p" Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.667801 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"] Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.672688 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9fbbd44f-4ww4p"] Nov 28 06:26:57 crc kubenswrapper[4955]: I1128 06:26:57.715645 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b613a232-cf97-4b35-9772-c2087bec6c28" path="/var/lib/kubelet/pods/b613a232-cf97-4b35-9772-c2087bec6c28/volumes" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.128208 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg"] Nov 28 06:26:58 crc kubenswrapper[4955]: E1128 06:26:58.128537 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b613a232-cf97-4b35-9772-c2087bec6c28" containerName="route-controller-manager" Nov 28 06:26:58 crc 
kubenswrapper[4955]: I1128 06:26:58.128564 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b613a232-cf97-4b35-9772-c2087bec6c28" containerName="route-controller-manager" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.128729 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b613a232-cf97-4b35-9772-c2087bec6c28" containerName="route-controller-manager" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.129258 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.142098 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.142386 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.142494 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.142663 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.142757 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.142856 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.142988 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg"] Nov 28 06:26:58 crc kubenswrapper[4955]: 
I1128 06:26:58.190662 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf48e98b-1d77-4ff1-9c2b-248e988967cb-serving-cert\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.190700 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf48e98b-1d77-4ff1-9c2b-248e988967cb-config\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.190723 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf48e98b-1d77-4ff1-9c2b-248e988967cb-client-ca\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.190915 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8hn\" (UniqueName: \"kubernetes.io/projected/cf48e98b-1d77-4ff1-9c2b-248e988967cb-kube-api-access-4m8hn\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.291625 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8hn\" (UniqueName: 
\"kubernetes.io/projected/cf48e98b-1d77-4ff1-9c2b-248e988967cb-kube-api-access-4m8hn\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.291667 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf48e98b-1d77-4ff1-9c2b-248e988967cb-serving-cert\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.291691 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf48e98b-1d77-4ff1-9c2b-248e988967cb-config\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.291718 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf48e98b-1d77-4ff1-9c2b-248e988967cb-client-ca\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.292707 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf48e98b-1d77-4ff1-9c2b-248e988967cb-client-ca\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc 
kubenswrapper[4955]: I1128 06:26:58.292892 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf48e98b-1d77-4ff1-9c2b-248e988967cb-config\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.300081 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf48e98b-1d77-4ff1-9c2b-248e988967cb-serving-cert\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.308044 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8hn\" (UniqueName: \"kubernetes.io/projected/cf48e98b-1d77-4ff1-9c2b-248e988967cb-kube-api-access-4m8hn\") pod \"route-controller-manager-5b9979494b-gpddg\" (UID: \"cf48e98b-1d77-4ff1-9c2b-248e988967cb\") " pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.534382 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.636718 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tqdwm" event={"ID":"e33445dd-1d02-47a1-bb19-42033b44eaa4","Type":"ContainerStarted","Data":"0b893d34e59aa650e916d3b7122819c205579aca515d328efa1d5dcc6c5e1f2e"} Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.639739 4955 generic.go:334] "Generic (PLEG): container finished" podID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerID="a8ac454de89fb8d1c432484bfe08b77118d30aa827c1b2223f3fefaa542d9a94" exitCode=0 Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.639780 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q2nl" event={"ID":"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae","Type":"ContainerDied","Data":"a8ac454de89fb8d1c432484bfe08b77118d30aa827c1b2223f3fefaa542d9a94"} Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.667422 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tqdwm" podStartSLOduration=2.141597537 podStartE2EDuration="4.667396587s" podCreationTimestamp="2025-11-28 06:26:54 +0000 UTC" firstStartedPulling="2025-11-28 06:26:55.588947225 +0000 UTC m=+338.178202795" lastFinishedPulling="2025-11-28 06:26:58.114746275 +0000 UTC m=+340.704001845" observedRunningTime="2025-11-28 06:26:58.658822018 +0000 UTC m=+341.248077598" watchObservedRunningTime="2025-11-28 06:26:58.667396587 +0000 UTC m=+341.256652167" Nov 28 06:26:58 crc kubenswrapper[4955]: I1128 06:26:58.997857 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg"] Nov 28 06:26:59 crc kubenswrapper[4955]: I1128 06:26:59.651554 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" event={"ID":"cf48e98b-1d77-4ff1-9c2b-248e988967cb","Type":"ContainerStarted","Data":"351b081b082902bf95c1c2a6d107d8e3dc751860cca579b13bbdcc08c99b5ce7"} Nov 28 06:27:00 crc kubenswrapper[4955]: I1128 06:27:00.659026 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q2nl" event={"ID":"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae","Type":"ContainerStarted","Data":"1b02acdf853ac9f2ef76b7dc4ac61f5c95a999233405abf8e95d070de41bf3a1"} Nov 28 06:27:00 crc kubenswrapper[4955]: I1128 06:27:00.660390 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" event={"ID":"cf48e98b-1d77-4ff1-9c2b-248e988967cb","Type":"ContainerStarted","Data":"a2597a23d8ed1303b9c2acfeca10ec7ce200dfa183965a500c34da8b46758944"} Nov 28 06:27:00 crc kubenswrapper[4955]: I1128 06:27:00.660579 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:27:00 crc kubenswrapper[4955]: I1128 06:27:00.698937 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4q2nl" podStartSLOduration=4.194164487 podStartE2EDuration="6.698918534s" podCreationTimestamp="2025-11-28 06:26:54 +0000 UTC" firstStartedPulling="2025-11-28 06:26:56.6070571 +0000 UTC m=+339.196312670" lastFinishedPulling="2025-11-28 06:26:59.111811147 +0000 UTC m=+341.701066717" observedRunningTime="2025-11-28 06:27:00.683692938 +0000 UTC m=+343.272948508" watchObservedRunningTime="2025-11-28 06:27:00.698918534 +0000 UTC m=+343.288174094" Nov 28 06:27:00 crc kubenswrapper[4955]: I1128 06:27:00.699734 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" 
podStartSLOduration=4.6997292040000005 podStartE2EDuration="4.699729204s" podCreationTimestamp="2025-11-28 06:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:27:00.699696582 +0000 UTC m=+343.288952162" watchObservedRunningTime="2025-11-28 06:27:00.699729204 +0000 UTC m=+343.288984774" Nov 28 06:27:00 crc kubenswrapper[4955]: I1128 06:27:00.886593 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b9979494b-gpddg" Nov 28 06:27:02 crc kubenswrapper[4955]: I1128 06:27:02.507710 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:27:02 crc kubenswrapper[4955]: I1128 06:27:02.508776 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:27:02 crc kubenswrapper[4955]: I1128 06:27:02.551659 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:27:02 crc kubenswrapper[4955]: I1128 06:27:02.715934 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:27:02 crc kubenswrapper[4955]: I1128 06:27:02.716002 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:27:02 crc kubenswrapper[4955]: I1128 06:27:02.729912 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:27:02 crc kubenswrapper[4955]: I1128 06:27:02.781608 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:27:03 crc kubenswrapper[4955]: I1128 06:27:03.741676 4955 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5kxmn" Nov 28 06:27:04 crc kubenswrapper[4955]: I1128 06:27:04.912757 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:27:04 crc kubenswrapper[4955]: I1128 06:27:04.913126 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:27:04 crc kubenswrapper[4955]: I1128 06:27:04.956945 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:27:05 crc kubenswrapper[4955]: I1128 06:27:05.137640 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:27:05 crc kubenswrapper[4955]: I1128 06:27:05.137891 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:27:05 crc kubenswrapper[4955]: I1128 06:27:05.190126 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:27:05 crc kubenswrapper[4955]: I1128 06:27:05.730154 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tqdwm" Nov 28 06:27:05 crc kubenswrapper[4955]: I1128 06:27:05.733740 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.321792 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9bshb"] Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.331006 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.352043 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9bshb"] Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.378216 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75283df1-fcd5-4384-b310-2527fcd289ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.378622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-bound-sa-token\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.378861 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-registry-tls\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.378970 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-626xm\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-kube-api-access-626xm\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.379178 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.379432 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75283df1-fcd5-4384-b310-2527fcd289ba-registry-certificates\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.379548 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75283df1-fcd5-4384-b310-2527fcd289ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.379655 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75283df1-fcd5-4384-b310-2527fcd289ba-trusted-ca\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.398082 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.481361 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75283df1-fcd5-4384-b310-2527fcd289ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.481413 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-bound-sa-token\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.481445 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-registry-tls\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.481474 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-626xm\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-kube-api-access-626xm\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc 
kubenswrapper[4955]: I1128 06:27:16.481541 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75283df1-fcd5-4384-b310-2527fcd289ba-registry-certificates\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.481565 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75283df1-fcd5-4384-b310-2527fcd289ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.481586 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75283df1-fcd5-4384-b310-2527fcd289ba-trusted-ca\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.482015 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75283df1-fcd5-4384-b310-2527fcd289ba-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.482858 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75283df1-fcd5-4384-b310-2527fcd289ba-trusted-ca\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.483347 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75283df1-fcd5-4384-b310-2527fcd289ba-registry-certificates\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.488755 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-registry-tls\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.499026 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75283df1-fcd5-4384-b310-2527fcd289ba-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.501001 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-626xm\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-kube-api-access-626xm\") pod \"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.501562 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75283df1-fcd5-4384-b310-2527fcd289ba-bound-sa-token\") pod 
\"image-registry-66df7c8f76-9bshb\" (UID: \"75283df1-fcd5-4384-b310-2527fcd289ba\") " pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:16 crc kubenswrapper[4955]: I1128 06:27:16.648715 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:17 crc kubenswrapper[4955]: I1128 06:27:17.167796 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9bshb"] Nov 28 06:27:17 crc kubenswrapper[4955]: W1128 06:27:17.177363 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75283df1_fcd5_4384_b310_2527fcd289ba.slice/crio-72383ece50b56dfc03d0d88495518098b6001551f6aff832945d58d0a3d53579 WatchSource:0}: Error finding container 72383ece50b56dfc03d0d88495518098b6001551f6aff832945d58d0a3d53579: Status 404 returned error can't find the container with id 72383ece50b56dfc03d0d88495518098b6001551f6aff832945d58d0a3d53579 Nov 28 06:27:17 crc kubenswrapper[4955]: I1128 06:27:17.785296 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" event={"ID":"75283df1-fcd5-4384-b310-2527fcd289ba","Type":"ContainerStarted","Data":"ec0d1839d14dca7a1959734ea8a418b4098e8ca0ab6ebcbf52ae2ff082f939d1"} Nov 28 06:27:17 crc kubenswrapper[4955]: I1128 06:27:17.785786 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" event={"ID":"75283df1-fcd5-4384-b310-2527fcd289ba","Type":"ContainerStarted","Data":"72383ece50b56dfc03d0d88495518098b6001551f6aff832945d58d0a3d53579"} Nov 28 06:27:17 crc kubenswrapper[4955]: I1128 06:27:17.785819 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:17 crc kubenswrapper[4955]: I1128 06:27:17.802856 
4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" podStartSLOduration=1.802834871 podStartE2EDuration="1.802834871s" podCreationTimestamp="2025-11-28 06:27:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:27:17.801056885 +0000 UTC m=+360.390312545" watchObservedRunningTime="2025-11-28 06:27:17.802834871 +0000 UTC m=+360.392090481" Nov 28 06:27:23 crc kubenswrapper[4955]: I1128 06:27:23.393498 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:27:23 crc kubenswrapper[4955]: I1128 06:27:23.394020 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:27:36 crc kubenswrapper[4955]: I1128 06:27:36.655322 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9bshb" Nov 28 06:27:36 crc kubenswrapper[4955]: I1128 06:27:36.727525 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4lj8g"] Nov 28 06:27:53 crc kubenswrapper[4955]: I1128 06:27:53.393688 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 28 06:27:53 crc kubenswrapper[4955]: I1128 06:27:53.394375 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:27:53 crc kubenswrapper[4955]: I1128 06:27:53.394487 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:27:53 crc kubenswrapper[4955]: I1128 06:27:53.396428 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ab4c45c04e143ac7aa6b50cb9f45e7068559fdf751a815d8b1521f9ea24b7a4"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:27:53 crc kubenswrapper[4955]: I1128 06:27:53.396603 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://3ab4c45c04e143ac7aa6b50cb9f45e7068559fdf751a815d8b1521f9ea24b7a4" gracePeriod=600 Nov 28 06:27:54 crc kubenswrapper[4955]: I1128 06:27:54.043141 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="3ab4c45c04e143ac7aa6b50cb9f45e7068559fdf751a815d8b1521f9ea24b7a4" exitCode=0 Nov 28 06:27:54 crc kubenswrapper[4955]: I1128 06:27:54.043382 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" 
event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"3ab4c45c04e143ac7aa6b50cb9f45e7068559fdf751a815d8b1521f9ea24b7a4"} Nov 28 06:27:54 crc kubenswrapper[4955]: I1128 06:27:54.043581 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"c4e6040241feb98903aeee5dc316e0267042bea33e703ef620d567288cd2e662"} Nov 28 06:27:54 crc kubenswrapper[4955]: I1128 06:27:54.043609 4955 scope.go:117] "RemoveContainer" containerID="fd708da93b935b55874da73fac4d746d13763e6f905f20e7be5f67573c8e4d2f" Nov 28 06:28:01 crc kubenswrapper[4955]: I1128 06:28:01.778168 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" podUID="89f41960-5178-4dcf-adaa-823b323397d5" containerName="registry" containerID="cri-o://15f5a26a93ff663bf0aa49b36bc8c13a37403a476538890d35e2cdc00bbd3870" gracePeriod=30 Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.105803 4955 generic.go:334] "Generic (PLEG): container finished" podID="89f41960-5178-4dcf-adaa-823b323397d5" containerID="15f5a26a93ff663bf0aa49b36bc8c13a37403a476538890d35e2cdc00bbd3870" exitCode=0 Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.105890 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" event={"ID":"89f41960-5178-4dcf-adaa-823b323397d5","Type":"ContainerDied","Data":"15f5a26a93ff663bf0aa49b36bc8c13a37403a476538890d35e2cdc00bbd3870"} Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.240814 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.286964 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-trusted-ca\") pod \"89f41960-5178-4dcf-adaa-823b323397d5\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.287020 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89f41960-5178-4dcf-adaa-823b323397d5-ca-trust-extracted\") pod \"89f41960-5178-4dcf-adaa-823b323397d5\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.287077 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89f41960-5178-4dcf-adaa-823b323397d5-installation-pull-secrets\") pod \"89f41960-5178-4dcf-adaa-823b323397d5\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.287135 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr9sn\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-kube-api-access-mr9sn\") pod \"89f41960-5178-4dcf-adaa-823b323397d5\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.287189 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-registry-tls\") pod \"89f41960-5178-4dcf-adaa-823b323397d5\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.287217 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-bound-sa-token\") pod \"89f41960-5178-4dcf-adaa-823b323397d5\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.287350 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"89f41960-5178-4dcf-adaa-823b323397d5\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.287399 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-registry-certificates\") pod \"89f41960-5178-4dcf-adaa-823b323397d5\" (UID: \"89f41960-5178-4dcf-adaa-823b323397d5\") " Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.288338 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "89f41960-5178-4dcf-adaa-823b323397d5" (UID: "89f41960-5178-4dcf-adaa-823b323397d5"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.288557 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "89f41960-5178-4dcf-adaa-823b323397d5" (UID: "89f41960-5178-4dcf-adaa-823b323397d5"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.294717 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89f41960-5178-4dcf-adaa-823b323397d5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "89f41960-5178-4dcf-adaa-823b323397d5" (UID: "89f41960-5178-4dcf-adaa-823b323397d5"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.295115 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "89f41960-5178-4dcf-adaa-823b323397d5" (UID: "89f41960-5178-4dcf-adaa-823b323397d5"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.297177 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "89f41960-5178-4dcf-adaa-823b323397d5" (UID: "89f41960-5178-4dcf-adaa-823b323397d5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.303812 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-kube-api-access-mr9sn" (OuterVolumeSpecName: "kube-api-access-mr9sn") pod "89f41960-5178-4dcf-adaa-823b323397d5" (UID: "89f41960-5178-4dcf-adaa-823b323397d5"). InnerVolumeSpecName "kube-api-access-mr9sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.305722 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "89f41960-5178-4dcf-adaa-823b323397d5" (UID: "89f41960-5178-4dcf-adaa-823b323397d5"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.315867 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89f41960-5178-4dcf-adaa-823b323397d5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "89f41960-5178-4dcf-adaa-823b323397d5" (UID: "89f41960-5178-4dcf-adaa-823b323397d5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.389228 4955 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/89f41960-5178-4dcf-adaa-823b323397d5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.389267 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr9sn\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-kube-api-access-mr9sn\") on node \"crc\" DevicePath \"\"" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.389280 4955 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.389294 4955 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/89f41960-5178-4dcf-adaa-823b323397d5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.389308 4955 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.389318 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89f41960-5178-4dcf-adaa-823b323397d5-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:28:02 crc kubenswrapper[4955]: I1128 06:28:02.389329 4955 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/89f41960-5178-4dcf-adaa-823b323397d5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 06:28:03 crc kubenswrapper[4955]: I1128 06:28:03.114385 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" event={"ID":"89f41960-5178-4dcf-adaa-823b323397d5","Type":"ContainerDied","Data":"c705b0303688e8087461f2283996e9623c639697ae5fc8df0a80d214e620ca36"} Nov 28 06:28:03 crc kubenswrapper[4955]: I1128 06:28:03.114787 4955 scope.go:117] "RemoveContainer" containerID="15f5a26a93ff663bf0aa49b36bc8c13a37403a476538890d35e2cdc00bbd3870" Nov 28 06:28:03 crc kubenswrapper[4955]: I1128 06:28:03.114570 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4lj8g" Nov 28 06:28:03 crc kubenswrapper[4955]: I1128 06:28:03.160880 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4lj8g"] Nov 28 06:28:03 crc kubenswrapper[4955]: I1128 06:28:03.167930 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4lj8g"] Nov 28 06:28:03 crc kubenswrapper[4955]: I1128 06:28:03.716402 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89f41960-5178-4dcf-adaa-823b323397d5" path="/var/lib/kubelet/pods/89f41960-5178-4dcf-adaa-823b323397d5/volumes" Nov 28 06:29:53 crc kubenswrapper[4955]: I1128 06:29:53.392743 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:29:53 crc kubenswrapper[4955]: I1128 06:29:53.393355 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.183048 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt"] Nov 28 06:30:00 crc kubenswrapper[4955]: E1128 06:30:00.183589 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89f41960-5178-4dcf-adaa-823b323397d5" containerName="registry" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.183609 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="89f41960-5178-4dcf-adaa-823b323397d5" 
containerName="registry" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.183736 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="89f41960-5178-4dcf-adaa-823b323397d5" containerName="registry" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.184181 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.185944 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.186053 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.198926 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt"] Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.307098 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvtg\" (UniqueName: \"kubernetes.io/projected/a4fa1542-f019-497c-bda9-8e389b823683-kube-api-access-8zvtg\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.307272 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4fa1542-f019-497c-bda9-8e389b823683-config-volume\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.307374 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4fa1542-f019-497c-bda9-8e389b823683-secret-volume\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.408786 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4fa1542-f019-497c-bda9-8e389b823683-config-volume\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.408928 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4fa1542-f019-497c-bda9-8e389b823683-secret-volume\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.409024 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvtg\" (UniqueName: \"kubernetes.io/projected/a4fa1542-f019-497c-bda9-8e389b823683-kube-api-access-8zvtg\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.410326 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4fa1542-f019-497c-bda9-8e389b823683-config-volume\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.418999 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4fa1542-f019-497c-bda9-8e389b823683-secret-volume\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.432638 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvtg\" (UniqueName: \"kubernetes.io/projected/a4fa1542-f019-497c-bda9-8e389b823683-kube-api-access-8zvtg\") pod \"collect-profiles-29405190-fvtvt\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.517446 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.768284 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt"] Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.994866 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" event={"ID":"a4fa1542-f019-497c-bda9-8e389b823683","Type":"ContainerStarted","Data":"11b3c98b0d8d9c62ac27721527ee585a93cc193ddc08c69476a20d0a3dbcb4c2"} Nov 28 06:30:00 crc kubenswrapper[4955]: I1128 06:30:00.994945 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" event={"ID":"a4fa1542-f019-497c-bda9-8e389b823683","Type":"ContainerStarted","Data":"c6308883c5477abb91e438030deef8011b5591f76c909c8c7aec024f224bdf16"} Nov 28 06:30:01 crc kubenswrapper[4955]: I1128 06:30:01.021489 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" podStartSLOduration=1.021463937 podStartE2EDuration="1.021463937s" podCreationTimestamp="2025-11-28 06:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:30:01.020386386 +0000 UTC m=+523.609641996" watchObservedRunningTime="2025-11-28 06:30:01.021463937 +0000 UTC m=+523.610719547" Nov 28 06:30:02 crc kubenswrapper[4955]: I1128 06:30:02.004475 4955 generic.go:334] "Generic (PLEG): container finished" podID="a4fa1542-f019-497c-bda9-8e389b823683" containerID="11b3c98b0d8d9c62ac27721527ee585a93cc193ddc08c69476a20d0a3dbcb4c2" exitCode=0 Nov 28 06:30:02 crc kubenswrapper[4955]: I1128 06:30:02.004579 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" event={"ID":"a4fa1542-f019-497c-bda9-8e389b823683","Type":"ContainerDied","Data":"11b3c98b0d8d9c62ac27721527ee585a93cc193ddc08c69476a20d0a3dbcb4c2"} Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.267258 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.455741 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4fa1542-f019-497c-bda9-8e389b823683-config-volume\") pod \"a4fa1542-f019-497c-bda9-8e389b823683\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.456844 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvtg\" (UniqueName: \"kubernetes.io/projected/a4fa1542-f019-497c-bda9-8e389b823683-kube-api-access-8zvtg\") pod \"a4fa1542-f019-497c-bda9-8e389b823683\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.456934 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4fa1542-f019-497c-bda9-8e389b823683-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4fa1542-f019-497c-bda9-8e389b823683" (UID: "a4fa1542-f019-497c-bda9-8e389b823683"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.456953 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4fa1542-f019-497c-bda9-8e389b823683-secret-volume\") pod \"a4fa1542-f019-497c-bda9-8e389b823683\" (UID: \"a4fa1542-f019-497c-bda9-8e389b823683\") " Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.457444 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4fa1542-f019-497c-bda9-8e389b823683-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.465127 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4fa1542-f019-497c-bda9-8e389b823683-kube-api-access-8zvtg" (OuterVolumeSpecName: "kube-api-access-8zvtg") pod "a4fa1542-f019-497c-bda9-8e389b823683" (UID: "a4fa1542-f019-497c-bda9-8e389b823683"). InnerVolumeSpecName "kube-api-access-8zvtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.465286 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4fa1542-f019-497c-bda9-8e389b823683-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4fa1542-f019-497c-bda9-8e389b823683" (UID: "a4fa1542-f019-497c-bda9-8e389b823683"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.559062 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvtg\" (UniqueName: \"kubernetes.io/projected/a4fa1542-f019-497c-bda9-8e389b823683-kube-api-access-8zvtg\") on node \"crc\" DevicePath \"\"" Nov 28 06:30:03 crc kubenswrapper[4955]: I1128 06:30:03.559151 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4fa1542-f019-497c-bda9-8e389b823683-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:30:04 crc kubenswrapper[4955]: I1128 06:30:04.026274 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" event={"ID":"a4fa1542-f019-497c-bda9-8e389b823683","Type":"ContainerDied","Data":"c6308883c5477abb91e438030deef8011b5591f76c909c8c7aec024f224bdf16"} Nov 28 06:30:04 crc kubenswrapper[4955]: I1128 06:30:04.026738 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt" Nov 28 06:30:04 crc kubenswrapper[4955]: I1128 06:30:04.026776 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6308883c5477abb91e438030deef8011b5591f76c909c8c7aec024f224bdf16" Nov 28 06:30:23 crc kubenswrapper[4955]: I1128 06:30:23.393537 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:30:23 crc kubenswrapper[4955]: I1128 06:30:23.394376 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:30:53 crc kubenswrapper[4955]: I1128 06:30:53.393538 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:30:53 crc kubenswrapper[4955]: I1128 06:30:53.394195 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:30:53 crc kubenswrapper[4955]: I1128 06:30:53.394276 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:30:53 crc kubenswrapper[4955]: I1128 06:30:53.395215 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4e6040241feb98903aeee5dc316e0267042bea33e703ef620d567288cd2e662"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:30:53 crc kubenswrapper[4955]: I1128 06:30:53.395537 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://c4e6040241feb98903aeee5dc316e0267042bea33e703ef620d567288cd2e662" gracePeriod=600 Nov 28 06:30:54 crc kubenswrapper[4955]: I1128 06:30:54.358280 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="c4e6040241feb98903aeee5dc316e0267042bea33e703ef620d567288cd2e662" exitCode=0 Nov 28 06:30:54 crc kubenswrapper[4955]: I1128 06:30:54.358319 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"c4e6040241feb98903aeee5dc316e0267042bea33e703ef620d567288cd2e662"} Nov 28 06:30:54 crc kubenswrapper[4955]: I1128 06:30:54.358643 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"a48f5c76d873d06051ccff10b32bc473afff507589be9330f056de9d4b7137d0"} Nov 28 06:30:54 crc kubenswrapper[4955]: I1128 06:30:54.358663 4955 scope.go:117] "RemoveContainer" 
containerID="3ab4c45c04e143ac7aa6b50cb9f45e7068559fdf751a815d8b1521f9ea24b7a4" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.801020 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-4tlhd"] Nov 28 06:32:00 crc kubenswrapper[4955]: E1128 06:32:00.801905 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4fa1542-f019-497c-bda9-8e389b823683" containerName="collect-profiles" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.801925 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4fa1542-f019-497c-bda9-8e389b823683" containerName="collect-profiles" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.802099 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4fa1542-f019-497c-bda9-8e389b823683" containerName="collect-profiles" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.802723 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-4tlhd" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.805364 4955 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hq4gv" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.805492 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.806415 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.811115 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rkw98"] Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.811956 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rkw98" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.814462 4955 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-hpt8c" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.816019 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-4tlhd"] Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.821434 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rkw98"] Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.843367 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-p27v9"] Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.845011 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.847718 4955 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-bkqkw" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.852744 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-p27v9"] Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.899201 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqccf\" (UniqueName: \"kubernetes.io/projected/e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2-kube-api-access-fqccf\") pod \"cert-manager-cainjector-7f985d654d-4tlhd\" (UID: \"e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-4tlhd" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.899558 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chl4\" (UniqueName: 
\"kubernetes.io/projected/501578cc-adbd-424b-be8d-6bc4ea59655e-kube-api-access-7chl4\") pod \"cert-manager-webhook-5655c58dd6-p27v9\" (UID: \"501578cc-adbd-424b-be8d-6bc4ea59655e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" Nov 28 06:32:00 crc kubenswrapper[4955]: I1128 06:32:00.899603 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4v8j\" (UniqueName: \"kubernetes.io/projected/e9ab5ef6-2183-4170-87c6-5704f80d6073-kube-api-access-t4v8j\") pod \"cert-manager-5b446d88c5-rkw98\" (UID: \"e9ab5ef6-2183-4170-87c6-5704f80d6073\") " pod="cert-manager/cert-manager-5b446d88c5-rkw98" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.001079 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqccf\" (UniqueName: \"kubernetes.io/projected/e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2-kube-api-access-fqccf\") pod \"cert-manager-cainjector-7f985d654d-4tlhd\" (UID: \"e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-4tlhd" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.001179 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7chl4\" (UniqueName: \"kubernetes.io/projected/501578cc-adbd-424b-be8d-6bc4ea59655e-kube-api-access-7chl4\") pod \"cert-manager-webhook-5655c58dd6-p27v9\" (UID: \"501578cc-adbd-424b-be8d-6bc4ea59655e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.001225 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4v8j\" (UniqueName: \"kubernetes.io/projected/e9ab5ef6-2183-4170-87c6-5704f80d6073-kube-api-access-t4v8j\") pod \"cert-manager-5b446d88c5-rkw98\" (UID: \"e9ab5ef6-2183-4170-87c6-5704f80d6073\") " pod="cert-manager/cert-manager-5b446d88c5-rkw98" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.020078 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4v8j\" (UniqueName: \"kubernetes.io/projected/e9ab5ef6-2183-4170-87c6-5704f80d6073-kube-api-access-t4v8j\") pod \"cert-manager-5b446d88c5-rkw98\" (UID: \"e9ab5ef6-2183-4170-87c6-5704f80d6073\") " pod="cert-manager/cert-manager-5b446d88c5-rkw98" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.020256 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7chl4\" (UniqueName: \"kubernetes.io/projected/501578cc-adbd-424b-be8d-6bc4ea59655e-kube-api-access-7chl4\") pod \"cert-manager-webhook-5655c58dd6-p27v9\" (UID: \"501578cc-adbd-424b-be8d-6bc4ea59655e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.024098 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqccf\" (UniqueName: \"kubernetes.io/projected/e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2-kube-api-access-fqccf\") pod \"cert-manager-cainjector-7f985d654d-4tlhd\" (UID: \"e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-4tlhd" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.129082 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-4tlhd" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.144781 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rkw98" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.166288 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.377467 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-4tlhd"] Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.387880 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.637281 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-p27v9"] Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.645044 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rkw98"] Nov 28 06:32:01 crc kubenswrapper[4955]: W1128 06:32:01.645379 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod501578cc_adbd_424b_be8d_6bc4ea59655e.slice/crio-35fed9aee393236db681cf7acddfe40303ce4edba555084d7450744fd16c26c8 WatchSource:0}: Error finding container 35fed9aee393236db681cf7acddfe40303ce4edba555084d7450744fd16c26c8: Status 404 returned error can't find the container with id 35fed9aee393236db681cf7acddfe40303ce4edba555084d7450744fd16c26c8 Nov 28 06:32:01 crc kubenswrapper[4955]: W1128 06:32:01.653675 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9ab5ef6_2183_4170_87c6_5704f80d6073.slice/crio-fbc6f6b4f3e08d4ec07ab9db40ed36a74f795042561abafe6982ec51c584957c WatchSource:0}: Error finding container fbc6f6b4f3e08d4ec07ab9db40ed36a74f795042561abafe6982ec51c584957c: Status 404 returned error can't find the container with id fbc6f6b4f3e08d4ec07ab9db40ed36a74f795042561abafe6982ec51c584957c Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.830427 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-5b446d88c5-rkw98" event={"ID":"e9ab5ef6-2183-4170-87c6-5704f80d6073","Type":"ContainerStarted","Data":"fbc6f6b4f3e08d4ec07ab9db40ed36a74f795042561abafe6982ec51c584957c"} Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.832047 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" event={"ID":"501578cc-adbd-424b-be8d-6bc4ea59655e","Type":"ContainerStarted","Data":"35fed9aee393236db681cf7acddfe40303ce4edba555084d7450744fd16c26c8"} Nov 28 06:32:01 crc kubenswrapper[4955]: I1128 06:32:01.833584 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-4tlhd" event={"ID":"e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2","Type":"ContainerStarted","Data":"3b5fdfcedd8e058abe38a702f10d6f9a14016178ea3cab1ae642c42ddf854e2a"} Nov 28 06:32:04 crc kubenswrapper[4955]: I1128 06:32:04.849491 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" event={"ID":"501578cc-adbd-424b-be8d-6bc4ea59655e","Type":"ContainerStarted","Data":"dab1db0904d0b73b577a6cea9efb59891f537ffbb884c170fd8ddfa619aa43df"} Nov 28 06:32:04 crc kubenswrapper[4955]: I1128 06:32:04.850060 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" Nov 28 06:32:04 crc kubenswrapper[4955]: I1128 06:32:04.850530 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-4tlhd" event={"ID":"e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2","Type":"ContainerStarted","Data":"4ca564b30270da4ff5c113ea84d4574b18ccfe5e4a2f98a0db4d76009e46b346"} Nov 28 06:32:04 crc kubenswrapper[4955]: I1128 06:32:04.851428 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rkw98" 
event={"ID":"e9ab5ef6-2183-4170-87c6-5704f80d6073","Type":"ContainerStarted","Data":"31c859163b3d91067d2c6acc0f3384f3fdf46fb72908b15e1c17d5525f9590d3"} Nov 28 06:32:04 crc kubenswrapper[4955]: I1128 06:32:04.863476 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" podStartSLOduration=2.024307696 podStartE2EDuration="4.863458524s" podCreationTimestamp="2025-11-28 06:32:00 +0000 UTC" firstStartedPulling="2025-11-28 06:32:01.647744246 +0000 UTC m=+644.236999816" lastFinishedPulling="2025-11-28 06:32:04.486895064 +0000 UTC m=+647.076150644" observedRunningTime="2025-11-28 06:32:04.860982954 +0000 UTC m=+647.450238524" watchObservedRunningTime="2025-11-28 06:32:04.863458524 +0000 UTC m=+647.452714094" Nov 28 06:32:04 crc kubenswrapper[4955]: I1128 06:32:04.878933 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-rkw98" podStartSLOduration=2.093968261 podStartE2EDuration="4.87891712s" podCreationTimestamp="2025-11-28 06:32:00 +0000 UTC" firstStartedPulling="2025-11-28 06:32:01.656445862 +0000 UTC m=+644.245701432" lastFinishedPulling="2025-11-28 06:32:04.441394721 +0000 UTC m=+647.030650291" observedRunningTime="2025-11-28 06:32:04.876202743 +0000 UTC m=+647.465458313" watchObservedRunningTime="2025-11-28 06:32:04.87891712 +0000 UTC m=+647.468172690" Nov 28 06:32:04 crc kubenswrapper[4955]: I1128 06:32:04.895477 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-4tlhd" podStartSLOduration=1.848781886 podStartE2EDuration="4.895459086s" podCreationTimestamp="2025-11-28 06:32:00 +0000 UTC" firstStartedPulling="2025-11-28 06:32:01.387627481 +0000 UTC m=+643.976883051" lastFinishedPulling="2025-11-28 06:32:04.434304681 +0000 UTC m=+647.023560251" observedRunningTime="2025-11-28 06:32:04.891610738 +0000 UTC m=+647.480866318" watchObservedRunningTime="2025-11-28 
06:32:04.895459086 +0000 UTC m=+647.484714656" Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.170791 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-p27v9" Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.580000 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tj8bb"] Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.580672 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovn-controller" containerID="cri-o://b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d" gracePeriod=30 Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.580714 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="northd" containerID="cri-o://7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3" gracePeriod=30 Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.580796 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae" gracePeriod=30 Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.580839 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovn-acl-logging" containerID="cri-o://a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb" gracePeriod=30 Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.580728 4955 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="sbdb" containerID="cri-o://65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35" gracePeriod=30 Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.580865 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="nbdb" containerID="cri-o://8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661" gracePeriod=30 Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.580981 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kube-rbac-proxy-node" containerID="cri-o://691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9" gracePeriod=30 Nov 28 06:32:11 crc kubenswrapper[4955]: I1128 06:32:11.630184 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" containerID="cri-o://a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95" gracePeriod=30 Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.561350 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/3.log" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.573059 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovn-acl-logging/0.log" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.573745 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovn-controller/0.log" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.574828 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae" exitCode=0 Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.574918 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.577931 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/2.log" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.578433 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/1.log" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.578475 4955 generic.go:334] "Generic (PLEG): container finished" podID="765bbe56-be77-4d81-824f-ad16924029f4" containerID="7c6b876e6e1a692fae96efe82abb9434e1e29b377ae063e6a1a3abf80a90b3dd" exitCode=2 Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.578537 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dxhtm" event={"ID":"765bbe56-be77-4d81-824f-ad16924029f4","Type":"ContainerDied","Data":"7c6b876e6e1a692fae96efe82abb9434e1e29b377ae063e6a1a3abf80a90b3dd"} Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.578575 4955 scope.go:117] "RemoveContainer" containerID="a7d995452c4cdfa91b69b301a60a6205b8b3e615514feee0f4db1e773f5e7cb3" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.586017 4955 scope.go:117] "RemoveContainer" 
containerID="7c6b876e6e1a692fae96efe82abb9434e1e29b377ae063e6a1a3abf80a90b3dd" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.586298 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dxhtm_openshift-multus(765bbe56-be77-4d81-824f-ad16924029f4)\"" pod="openshift-multus/multus-dxhtm" podUID="765bbe56-be77-4d81-824f-ad16924029f4" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.760089 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/3.log" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.762919 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovn-acl-logging/0.log" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.763452 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovn-controller/0.log" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.763925 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833038 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-netd\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833089 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-bin\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833110 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-systemd\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833148 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-node-log\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833175 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovn-node-metrics-cert\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833199 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-var-lib-openvswitch\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833225 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-ovn-kubernetes\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833243 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833282 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt4xc\" (UniqueName: \"kubernetes.io/projected/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-kube-api-access-lt4xc\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833301 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-config\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833328 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-etc-openvswitch\") pod 
\"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833351 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-kubelet\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833374 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-log-socket\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833394 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-slash\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833408 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-env-overrides\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833425 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-systemd-units\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833436 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-openvswitch\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833452 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-script-lib\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833467 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-ovn\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833479 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-netns\") pod \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\" (UID: \"9e192dfd-62ad-4870-b2fd-3c2a09006f6f\") " Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833706 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833738 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.833755 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.835742 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.835805 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.835809 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.835899 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.835938 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-node-log" (OuterVolumeSpecName: "node-log") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836456 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836535 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836540 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836586 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-slash" (OuterVolumeSpecName: "host-slash") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836564 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-log-socket" (OuterVolumeSpecName: "log-socket") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836592 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836635 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836949 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.836960 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.839488 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vzwqm"] Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839815 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovn-acl-logging" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.839833 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovn-acl-logging" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839852 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="nbdb" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.839862 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="nbdb" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839875 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.839884 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839894 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kubecfg-setup" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.839902 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kubecfg-setup" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839918 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="sbdb" Nov 28 06:32:12 
crc kubenswrapper[4955]: I1128 06:32:12.839926 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="sbdb" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839938 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.839947 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839962 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.839970 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839980 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kube-rbac-proxy-node" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.839988 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kube-rbac-proxy-node" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.839998 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="northd" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840007 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="northd" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.840022 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 
crc kubenswrapper[4955]: I1128 06:32:12.840030 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.840043 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovn-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840055 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovn-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.840067 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840078 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840221 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840238 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="nbdb" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840251 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="northd" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840265 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840275 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 
06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840307 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovn-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840317 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840328 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="sbdb" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840341 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovn-acl-logging" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840351 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840362 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="kube-rbac-proxy-node" Nov 28 06:32:12 crc kubenswrapper[4955]: E1128 06:32:12.840480 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840491 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.840633 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerName="ovnkube-controller" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.842980 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.843271 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-kube-api-access-lt4xc" (OuterVolumeSpecName: "kube-api-access-lt4xc") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "kube-api-access-lt4xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.845344 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.855106 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9e192dfd-62ad-4870-b2fd-3c2a09006f6f" (UID: "9e192dfd-62ad-4870-b2fd-3c2a09006f6f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934266 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-slash\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934322 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovn-node-metrics-cert\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934351 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-kubelet\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934416 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-cni-netd\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934437 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvs8\" (UniqueName: \"kubernetes.io/projected/ae2d7cee-1227-48a9-85b6-b2c7de007e97-kube-api-access-9lvs8\") pod \"ovnkube-node-vzwqm\" (UID: 
\"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934468 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-log-socket\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934570 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-etc-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934593 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-node-log\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934615 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-systemd-units\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934637 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-systemd\") pod \"ovnkube-node-vzwqm\" (UID: 
\"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934759 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovnkube-script-lib\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934807 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovnkube-config\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934833 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934858 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934924 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-run-netns\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934960 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-env-overrides\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.934996 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-var-lib-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935027 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-cni-bin\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935094 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-ovn\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935124 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-run-ovn-kubernetes\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935228 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935249 4955 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935264 4955 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935276 4955 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935291 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt4xc\" (UniqueName: \"kubernetes.io/projected/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-kube-api-access-lt4xc\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935303 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc 
kubenswrapper[4955]: I1128 06:32:12.935315 4955 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935328 4955 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935339 4955 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-log-socket\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935352 4955 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-slash\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935363 4955 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935374 4955 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935386 4955 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935397 4955 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935409 4955 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935421 4955 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935432 4955 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935443 4955 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935453 4955 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:12 crc kubenswrapper[4955]: I1128 06:32:12.935465 4955 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9e192dfd-62ad-4870-b2fd-3c2a09006f6f-node-log\") on node \"crc\" DevicePath \"\"" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036378 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-log-socket\") pod \"ovnkube-node-vzwqm\" (UID: 
\"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036470 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-etc-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036491 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-node-log\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036542 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-systemd-units\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036560 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-systemd\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036495 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-log-socket\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc 
kubenswrapper[4955]: I1128 06:32:13.036583 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovnkube-script-lib\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036629 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-systemd-units\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036641 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-systemd\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036657 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036636 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc 
kubenswrapper[4955]: I1128 06:32:13.036640 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-node-log\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036691 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovnkube-config\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036715 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036604 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-etc-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036748 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-run-netns\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036759 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036771 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-env-overrides\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036783 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-run-netns\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036795 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-var-lib-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036816 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-cni-bin\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-ovn\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036873 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-run-ovn-kubernetes\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036907 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-slash\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036935 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovn-node-metrics-cert\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036958 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-kubelet\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.036980 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-cni-netd\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037000 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lvs8\" (UniqueName: \"kubernetes.io/projected/ae2d7cee-1227-48a9-85b6-b2c7de007e97-kube-api-access-9lvs8\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037281 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-env-overrides\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037317 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-run-ovn-kubernetes\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037350 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-var-lib-openvswitch\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037383 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-cni-bin\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037397 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-slash\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037411 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-run-ovn\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037442 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-kubelet\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037452 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae2d7cee-1227-48a9-85b6-b2c7de007e97-host-cni-netd\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.037936 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovnkube-config\") pod \"ovnkube-node-vzwqm\" (UID: 
\"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.038286 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovnkube-script-lib\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.042533 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ae2d7cee-1227-48a9-85b6-b2c7de007e97-ovn-node-metrics-cert\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.057755 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lvs8\" (UniqueName: \"kubernetes.io/projected/ae2d7cee-1227-48a9-85b6-b2c7de007e97-kube-api-access-9lvs8\") pod \"ovnkube-node-vzwqm\" (UID: \"ae2d7cee-1227-48a9-85b6-b2c7de007e97\") " pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.186029 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.587068 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovnkube-controller/3.log" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.590371 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovn-acl-logging/0.log" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591125 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-tj8bb_9e192dfd-62ad-4870-b2fd-3c2a09006f6f/ovn-controller/0.log" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591659 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95" exitCode=0 Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591721 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35" exitCode=0 Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591717 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591777 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591807 4955 scope.go:117] "RemoveContainer" containerID="a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591788 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591959 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.591742 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661" exitCode=0 Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592085 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3" exitCode=0 Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592116 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9" exitCode=0 Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592136 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb" exitCode=143 Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592161 4955 generic.go:334] 
"Generic (PLEG): container finished" podID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" containerID="b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d" exitCode=143 Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592252 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592295 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592319 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592339 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592356 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592368 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592379 4955 pod_container_deletor.go:114] "Failed to issue the request 
to remove container" containerID={"Type":"cri-o","ID":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592390 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592401 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592411 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592422 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592433 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592449 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592465 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592477 
4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592489 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592499 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592542 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592553 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592563 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592574 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592584 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592595 4955 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592612 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tj8bb" event={"ID":"9e192dfd-62ad-4870-b2fd-3c2a09006f6f","Type":"ContainerDied","Data":"8d6b5060096695cdf1c12005a6cf7e007be5281f79f93f7b091eefe614524e33"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592631 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592644 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592655 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592665 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592676 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592686 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} Nov 28 
06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592696 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592707 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592717 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.592729 4955 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.594939 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/2.log" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.600412 4955 generic.go:334] "Generic (PLEG): container finished" podID="ae2d7cee-1227-48a9-85b6-b2c7de007e97" containerID="64963ffc81b2f9f1356ab440c24a4f6e51d789b8a3f40421875fc0f71fc32aa5" exitCode=0 Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.600476 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerDied","Data":"64963ffc81b2f9f1356ab440c24a4f6e51d789b8a3f40421875fc0f71fc32aa5"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.600535 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" 
event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"80f500ecaaeaa4bd323efeb1221d5b08bfa4d6a23e5bb8f3e2f7cc18afd9a5b4"} Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.659085 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.693885 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tj8bb"] Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.700453 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tj8bb"] Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.724956 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e192dfd-62ad-4870-b2fd-3c2a09006f6f" path="/var/lib/kubelet/pods/9e192dfd-62ad-4870-b2fd-3c2a09006f6f/volumes" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.732099 4955 scope.go:117] "RemoveContainer" containerID="65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.763416 4955 scope.go:117] "RemoveContainer" containerID="8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.776885 4955 scope.go:117] "RemoveContainer" containerID="7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.788301 4955 scope.go:117] "RemoveContainer" containerID="90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.798831 4955 scope.go:117] "RemoveContainer" containerID="691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.813242 4955 scope.go:117] "RemoveContainer" containerID="a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb" Nov 28 
06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.830899 4955 scope.go:117] "RemoveContainer" containerID="b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.846908 4955 scope.go:117] "RemoveContainer" containerID="1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.879749 4955 scope.go:117] "RemoveContainer" containerID="a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.880233 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": container with ID starting with a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95 not found: ID does not exist" containerID="a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.880269 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} err="failed to get container status \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": rpc error: code = NotFound desc = could not find container \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": container with ID starting with a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.880297 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.880838 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": container with ID starting with 8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86 not found: ID does not exist" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.880869 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} err="failed to get container status \"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": rpc error: code = NotFound desc = could not find container \"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": container with ID starting with 8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.880885 4955 scope.go:117] "RemoveContainer" containerID="65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.881312 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": container with ID starting with 65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35 not found: ID does not exist" containerID="65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.881359 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} err="failed to get container status \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": rpc error: code = NotFound desc = could not find container \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": container with ID 
starting with 65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.881400 4955 scope.go:117] "RemoveContainer" containerID="8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.881720 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": container with ID starting with 8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661 not found: ID does not exist" containerID="8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.881749 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} err="failed to get container status \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": rpc error: code = NotFound desc = could not find container \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": container with ID starting with 8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.881768 4955 scope.go:117] "RemoveContainer" containerID="7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.882079 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": container with ID starting with 7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3 not found: ID does not exist" containerID="7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3" Nov 28 
06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.882145 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} err="failed to get container status \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": rpc error: code = NotFound desc = could not find container \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": container with ID starting with 7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.882200 4955 scope.go:117] "RemoveContainer" containerID="90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.882648 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": container with ID starting with 90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae not found: ID does not exist" containerID="90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.882682 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} err="failed to get container status \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": rpc error: code = NotFound desc = could not find container \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": container with ID starting with 90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.882702 4955 scope.go:117] "RemoveContainer" 
containerID="691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.882966 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": container with ID starting with 691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9 not found: ID does not exist" containerID="691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.882989 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} err="failed to get container status \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": rpc error: code = NotFound desc = could not find container \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": container with ID starting with 691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.883003 4955 scope.go:117] "RemoveContainer" containerID="a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.883263 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": container with ID starting with a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb not found: ID does not exist" containerID="a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.883288 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} err="failed to get container status \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": rpc error: code = NotFound desc = could not find container \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": container with ID starting with a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.883309 4955 scope.go:117] "RemoveContainer" containerID="b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.883663 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": container with ID starting with b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d not found: ID does not exist" containerID="b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.883694 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} err="failed to get container status \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": rpc error: code = NotFound desc = could not find container \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": container with ID starting with b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.883715 4955 scope.go:117] "RemoveContainer" containerID="1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032" Nov 28 06:32:13 crc kubenswrapper[4955]: E1128 06:32:13.884014 4955 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": container with ID starting with 1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032 not found: ID does not exist" containerID="1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.884037 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"} err="failed to get container status \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": rpc error: code = NotFound desc = could not find container \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": container with ID starting with 1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.884056 4955 scope.go:117] "RemoveContainer" containerID="a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.884421 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} err="failed to get container status \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": rpc error: code = NotFound desc = could not find container \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": container with ID starting with a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.884448 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.884886 4955 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} err="failed to get container status \"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": rpc error: code = NotFound desc = could not find container \"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": container with ID starting with 8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.884908 4955 scope.go:117] "RemoveContainer" containerID="65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.885211 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} err="failed to get container status \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": rpc error: code = NotFound desc = could not find container \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": container with ID starting with 65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.885252 4955 scope.go:117] "RemoveContainer" containerID="8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.885573 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} err="failed to get container status \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": rpc error: code = NotFound desc = could not find container \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": container with ID starting with 
8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.885593 4955 scope.go:117] "RemoveContainer" containerID="7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.885875 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} err="failed to get container status \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": rpc error: code = NotFound desc = could not find container \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": container with ID starting with 7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.885901 4955 scope.go:117] "RemoveContainer" containerID="90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.886186 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} err="failed to get container status \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": rpc error: code = NotFound desc = could not find container \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": container with ID starting with 90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.886210 4955 scope.go:117] "RemoveContainer" containerID="691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.886430 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} err="failed to get container status \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": rpc error: code = NotFound desc = could not find container \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": container with ID starting with 691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9 not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.886461 4955 scope.go:117] "RemoveContainer" containerID="a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.886769 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} err="failed to get container status \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": rpc error: code = NotFound desc = could not find container \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": container with ID starting with a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb not found: ID does not exist" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.886800 4955 scope.go:117] "RemoveContainer" containerID="b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d" Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.887039 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} err="failed to get container status \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": rpc error: code = NotFound desc = could not find container \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": container with ID starting with b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d not found: ID does not 
exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.887067 4955 scope.go:117] "RemoveContainer" containerID="1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.887428 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"} err="failed to get container status \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": rpc error: code = NotFound desc = could not find container \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": container with ID starting with 1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.887454 4955 scope.go:117] "RemoveContainer" containerID="a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.887788 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} err="failed to get container status \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": rpc error: code = NotFound desc = could not find container \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": container with ID starting with a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.887813 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.888070 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} err="failed to get container status \"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": rpc error: code = NotFound desc = could not find container \"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": container with ID starting with 8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.888093 4955 scope.go:117] "RemoveContainer" containerID="65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.888432 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} err="failed to get container status \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": rpc error: code = NotFound desc = could not find container \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": container with ID starting with 65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.888455 4955 scope.go:117] "RemoveContainer" containerID="8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.888759 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} err="failed to get container status \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": rpc error: code = NotFound desc = could not find container \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": container with ID starting with 8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.888785 4955 scope.go:117] "RemoveContainer" containerID="7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.889109 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} err="failed to get container status \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": rpc error: code = NotFound desc = could not find container \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": container with ID starting with 7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.889130 4955 scope.go:117] "RemoveContainer" containerID="90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.889371 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} err="failed to get container status \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": rpc error: code = NotFound desc = could not find container \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": container with ID starting with 90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.889393 4955 scope.go:117] "RemoveContainer" containerID="691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.889651 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} err="failed to get container status \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": rpc error: code = NotFound desc = could not find container \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": container with ID starting with 691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.889674 4955 scope.go:117] "RemoveContainer" containerID="a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.889930 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} err="failed to get container status \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": rpc error: code = NotFound desc = could not find container \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": container with ID starting with a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.889957 4955 scope.go:117] "RemoveContainer" containerID="b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.890261 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} err="failed to get container status \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": rpc error: code = NotFound desc = could not find container \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": container with ID starting with b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.890285 4955 scope.go:117] "RemoveContainer" containerID="1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.890604 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"} err="failed to get container status \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": rpc error: code = NotFound desc = could not find container \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": container with ID starting with 1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.890628 4955 scope.go:117] "RemoveContainer" containerID="a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.890897 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} err="failed to get container status \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": rpc error: code = NotFound desc = could not find container \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": container with ID starting with a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.890922 4955 scope.go:117] "RemoveContainer" containerID="8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.891164 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86"} err="failed to get container status \"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": rpc error: code = NotFound desc = could not find container \"8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86\": container with ID starting with 8e05862b9659d05a906c80b2b0be62e74ae8f2534e0ba3431da041a1d5ef9b86 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.891189 4955 scope.go:117] "RemoveContainer" containerID="65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.891448 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35"} err="failed to get container status \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": rpc error: code = NotFound desc = could not find container \"65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35\": container with ID starting with 65dd411c7c093644ffc9b812a510fc8e45baae8104c087634b498404407a5a35 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.891475 4955 scope.go:117] "RemoveContainer" containerID="8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.891823 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661"} err="failed to get container status \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": rpc error: code = NotFound desc = could not find container \"8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661\": container with ID starting with 8c183ec2afe37fa9b6c37166bbb1e802479c2c46cf9d62881d99a3a9a61c2661 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.891848 4955 scope.go:117] "RemoveContainer" containerID="7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.892140 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3"} err="failed to get container status \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": rpc error: code = NotFound desc = could not find container \"7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3\": container with ID starting with 7aa97e0de300f9ececbc6de637a4f39751f3373ce7e478dede8163c87f3f3ac3 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.892160 4955 scope.go:117] "RemoveContainer" containerID="90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.892491 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae"} err="failed to get container status \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": rpc error: code = NotFound desc = could not find container \"90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae\": container with ID starting with 90e20e819713bdd1a43ba5c99a2378f789ab3a5ff8e9200553599b3f40fb45ae not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.892599 4955 scope.go:117] "RemoveContainer" containerID="691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.892906 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9"} err="failed to get container status \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": rpc error: code = NotFound desc = could not find container \"691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9\": container with ID starting with 691dc029810906e61ed23b8dbc5ea56c58de6a40b58a911ebdfa2734e1a566e9 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.892929 4955 scope.go:117] "RemoveContainer" containerID="a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.893217 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb"} err="failed to get container status \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": rpc error: code = NotFound desc = could not find container \"a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb\": container with ID starting with a12b830f9726437372bcd4b1cffdca6e9d34e6df5fc9d4bf9a631b005d71ceeb not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.893247 4955 scope.go:117] "RemoveContainer" containerID="b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.893641 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d"} err="failed to get container status \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": rpc error: code = NotFound desc = could not find container \"b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d\": container with ID starting with b56d7f3364b50aebea4810d12916a4bd3457b7a68ed390c2eb00d0deb781a71d not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.893667 4955 scope.go:117] "RemoveContainer" containerID="1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.894012 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032"} err="failed to get container status \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": rpc error: code = NotFound desc = could not find container \"1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032\": container with ID starting with 1512a90ce62391245a155d2067a165c383d20c5b8a9ae5da838ee9ea404eb032 not found: ID does not exist"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.894036 4955 scope.go:117] "RemoveContainer" containerID="a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"
Nov 28 06:32:13 crc kubenswrapper[4955]: I1128 06:32:13.894323 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95"} err="failed to get container status \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": rpc error: code = NotFound desc = could not find container \"a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95\": container with ID starting with a46952daf897b37e1915dd1ab21a27361d96bf4f6031585cc52a253e1221fd95 not found: ID does not exist"
Nov 28 06:32:14 crc kubenswrapper[4955]: I1128 06:32:14.615383 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"47a5dba4a6a17d9cb486ac8876590287ed5e78fbd8c461fbb1c5e6139d600a91"}
Nov 28 06:32:14 crc kubenswrapper[4955]: I1128 06:32:14.615925 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"4d37be90c6b0b123cf711340946d0c8c879b46eb9762181f182825899470fbbe"}
Nov 28 06:32:14 crc kubenswrapper[4955]: I1128 06:32:14.615961 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"cab63ea8b29689ac0cd17ff2ccdc63083959061a5f394a2bbb9aedef25244f3c"}
Nov 28 06:32:14 crc kubenswrapper[4955]: I1128 06:32:14.615991 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"ca7536b52de067a58e02312a9709228e76e56ec0f90ebbfb312380f4e37e2313"}
Nov 28 06:32:14 crc kubenswrapper[4955]: I1128 06:32:14.616015 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"7df1fc4ccad02aaa0fd53e0097ed6a5861cc92861eeb36b08e45ebd0e1c3c735"}
Nov 28 06:32:14 crc kubenswrapper[4955]: I1128 06:32:14.616040 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"1379b5e885bcc7171b27a744d35edaf3c219e500ea5e542f3981b03a55a72420"}
Nov 28 06:32:17 crc kubenswrapper[4955]: I1128 06:32:17.645246 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"4ed55b5f0daf3682bbfcc4b041f5442b2eed352d61453651b983296b5b043fb8"}
Nov 28 06:32:19 crc kubenswrapper[4955]: I1128 06:32:19.695003 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" event={"ID":"ae2d7cee-1227-48a9-85b6-b2c7de007e97","Type":"ContainerStarted","Data":"8e61c5d0c42d6da9eab342cc721ac25925106bfcc95aec0244ba567ba9209732"}
Nov 28 06:32:19 crc kubenswrapper[4955]: I1128 06:32:19.695800 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm"
Nov 28 06:32:19 crc kubenswrapper[4955]: I1128 06:32:19.695910 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm"
Nov 28 06:32:19 crc kubenswrapper[4955]: I1128 06:32:19.695964 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm"
Nov 28 06:32:19 crc kubenswrapper[4955]: I1128 06:32:19.725321 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm"
Nov 28 06:32:19 crc kubenswrapper[4955]: I1128 06:32:19.734171 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm"
Nov 28 06:32:19 crc kubenswrapper[4955]: I1128 06:32:19.734689 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm" podStartSLOduration=7.734652072 podStartE2EDuration="7.734652072s" podCreationTimestamp="2025-11-28 06:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:32:19.733167709 +0000 UTC m=+662.322423299" watchObservedRunningTime="2025-11-28 06:32:19.734652072 +0000 UTC m=+662.323907642"
Nov 28 06:32:23 crc kubenswrapper[4955]: I1128 06:32:23.704657 4955 scope.go:117] "RemoveContainer" containerID="7c6b876e6e1a692fae96efe82abb9434e1e29b377ae063e6a1a3abf80a90b3dd"
Nov 28 06:32:23 crc kubenswrapper[4955]: E1128 06:32:23.705345 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dxhtm_openshift-multus(765bbe56-be77-4d81-824f-ad16924029f4)\"" pod="openshift-multus/multus-dxhtm" podUID="765bbe56-be77-4d81-824f-ad16924029f4"
Nov 28 06:32:35 crc kubenswrapper[4955]: I1128 06:32:35.704703 4955 scope.go:117] "RemoveContainer" containerID="7c6b876e6e1a692fae96efe82abb9434e1e29b377ae063e6a1a3abf80a90b3dd"
Nov 28 06:32:36 crc kubenswrapper[4955]: I1128 06:32:36.816165 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dxhtm_765bbe56-be77-4d81-824f-ad16924029f4/kube-multus/2.log"
Nov 28 06:32:36 crc kubenswrapper[4955]: I1128 06:32:36.816616 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dxhtm" event={"ID":"765bbe56-be77-4d81-824f-ad16924029f4","Type":"ContainerStarted","Data":"e8488e10dd983f6584ecc67d85fc142de1cd56f219eb7fc6ea28b0817fbae672"}
Nov 28 06:32:43 crc kubenswrapper[4955]: I1128 06:32:43.211416 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vzwqm"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.563732 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"]
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.565451 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.567192 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.576675 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"]
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.668289 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgmp\" (UniqueName: \"kubernetes.io/projected/dcba2b58-4038-4b2a-879e-466b64878a49-kube-api-access-lxgmp\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.668649 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.668715 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.770003 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.770201 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgmp\" (UniqueName: \"kubernetes.io/projected/dcba2b58-4038-4b2a-879e-466b64878a49-kube-api-access-lxgmp\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.770263 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.770942 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.771126 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.794038 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgmp\" (UniqueName: \"kubernetes.io/projected/dcba2b58-4038-4b2a-879e-466b64878a49-kube-api-access-lxgmp\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:50 crc kubenswrapper[4955]: I1128 06:32:50.895684 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:51 crc kubenswrapper[4955]: I1128 06:32:51.125132 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"]
Nov 28 06:32:51 crc kubenswrapper[4955]: I1128 06:32:51.910130 4955 generic.go:334] "Generic (PLEG): container finished" podID="dcba2b58-4038-4b2a-879e-466b64878a49" containerID="ae79999dcfac60f59cec0d689f169d20dc839b79a9c321b8440ce725fedbc102" exitCode=0
Nov 28 06:32:51 crc kubenswrapper[4955]: I1128 06:32:51.910202 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr" event={"ID":"dcba2b58-4038-4b2a-879e-466b64878a49","Type":"ContainerDied","Data":"ae79999dcfac60f59cec0d689f169d20dc839b79a9c321b8440ce725fedbc102"}
Nov 28 06:32:51 crc kubenswrapper[4955]: I1128 06:32:51.910273 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr" event={"ID":"dcba2b58-4038-4b2a-879e-466b64878a49","Type":"ContainerStarted","Data":"5623f07c6a26a694ebe18e317be603442b568e0efe391b207a6e52f0065b3eea"}
Nov 28 06:32:53 crc kubenswrapper[4955]: I1128 06:32:53.393051 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 06:32:53 crc kubenswrapper[4955]: I1128 06:32:53.393455 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 06:32:53 crc kubenswrapper[4955]: I1128 06:32:53.925281 4955 generic.go:334] "Generic (PLEG): container finished" podID="dcba2b58-4038-4b2a-879e-466b64878a49" containerID="855cd2ef05af637250f4ac1ecac38bfdac299a36e825845be03d44097e7cc044" exitCode=0
Nov 28 06:32:53 crc kubenswrapper[4955]: I1128 06:32:53.925355 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr" event={"ID":"dcba2b58-4038-4b2a-879e-466b64878a49","Type":"ContainerDied","Data":"855cd2ef05af637250f4ac1ecac38bfdac299a36e825845be03d44097e7cc044"}
Nov 28 06:32:54 crc kubenswrapper[4955]: I1128 06:32:54.936177 4955 generic.go:334] "Generic (PLEG): container finished" podID="dcba2b58-4038-4b2a-879e-466b64878a49" containerID="6571da4af2cc6dd39204047ffdf1394083a1281b659b4de45b8ccb024cc45987" exitCode=0
Nov 28 06:32:54 crc kubenswrapper[4955]: I1128 06:32:54.936332 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr" event={"ID":"dcba2b58-4038-4b2a-879e-466b64878a49","Type":"ContainerDied","Data":"6571da4af2cc6dd39204047ffdf1394083a1281b659b4de45b8ccb024cc45987"}
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.217832 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.343488 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxgmp\" (UniqueName: \"kubernetes.io/projected/dcba2b58-4038-4b2a-879e-466b64878a49-kube-api-access-lxgmp\") pod \"dcba2b58-4038-4b2a-879e-466b64878a49\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") "
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.343569 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-bundle\") pod \"dcba2b58-4038-4b2a-879e-466b64878a49\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") "
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.343613 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-util\") pod \"dcba2b58-4038-4b2a-879e-466b64878a49\" (UID: \"dcba2b58-4038-4b2a-879e-466b64878a49\") "
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.344411 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-bundle" (OuterVolumeSpecName: "bundle") pod "dcba2b58-4038-4b2a-879e-466b64878a49" (UID: "dcba2b58-4038-4b2a-879e-466b64878a49"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.355777 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcba2b58-4038-4b2a-879e-466b64878a49-kube-api-access-lxgmp" (OuterVolumeSpecName: "kube-api-access-lxgmp") pod "dcba2b58-4038-4b2a-879e-466b64878a49" (UID: "dcba2b58-4038-4b2a-879e-466b64878a49"). InnerVolumeSpecName "kube-api-access-lxgmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.356627 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-util" (OuterVolumeSpecName: "util") pod "dcba2b58-4038-4b2a-879e-466b64878a49" (UID: "dcba2b58-4038-4b2a-879e-466b64878a49"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.445188 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxgmp\" (UniqueName: \"kubernetes.io/projected/dcba2b58-4038-4b2a-879e-466b64878a49-kube-api-access-lxgmp\") on node \"crc\" DevicePath \"\""
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.445233 4955 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.445249 4955 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dcba2b58-4038-4b2a-879e-466b64878a49-util\") on node \"crc\" DevicePath \"\""
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.954642 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr" event={"ID":"dcba2b58-4038-4b2a-879e-466b64878a49","Type":"ContainerDied","Data":"5623f07c6a26a694ebe18e317be603442b568e0efe391b207a6e52f0065b3eea"}
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.954694 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr"
Nov 28 06:32:56 crc kubenswrapper[4955]: I1128 06:32:56.954697 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5623f07c6a26a694ebe18e317be603442b568e0efe391b207a6e52f0065b3eea"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.208208 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt"]
Nov 28 06:32:59 crc kubenswrapper[4955]: E1128 06:32:59.209404 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcba2b58-4038-4b2a-879e-466b64878a49" containerName="util"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.209526 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcba2b58-4038-4b2a-879e-466b64878a49" containerName="util"
Nov 28 06:32:59 crc kubenswrapper[4955]: E1128 06:32:59.209624 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcba2b58-4038-4b2a-879e-466b64878a49" containerName="pull"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.209709 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcba2b58-4038-4b2a-879e-466b64878a49" containerName="pull"
Nov 28 06:32:59 crc kubenswrapper[4955]: E1128 06:32:59.209785 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcba2b58-4038-4b2a-879e-466b64878a49" containerName="extract"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.209857 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcba2b58-4038-4b2a-879e-466b64878a49" containerName="extract"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.210068 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcba2b58-4038-4b2a-879e-466b64878a49" containerName="extract"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.210618 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.212711 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xv7sr"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.213073 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.216137 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt"]
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.219978 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.280692 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j48gr\" (UniqueName: \"kubernetes.io/projected/c810a1d4-a881-4a83-b9cc-853762f772ee-kube-api-access-j48gr\") pod \"nmstate-operator-5b5b58f5c8-8sdmt\" (UID: \"c810a1d4-a881-4a83-b9cc-853762f772ee\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.382589 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j48gr\" (UniqueName: \"kubernetes.io/projected/c810a1d4-a881-4a83-b9cc-853762f772ee-kube-api-access-j48gr\") pod \"nmstate-operator-5b5b58f5c8-8sdmt\" (UID: \"c810a1d4-a881-4a83-b9cc-853762f772ee\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.401131 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j48gr\" (UniqueName: \"kubernetes.io/projected/c810a1d4-a881-4a83-b9cc-853762f772ee-kube-api-access-j48gr\") pod \"nmstate-operator-5b5b58f5c8-8sdmt\" (UID: \"c810a1d4-a881-4a83-b9cc-853762f772ee\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.569943 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt"
Nov 28 06:32:59 crc kubenswrapper[4955]: I1128 06:32:59.976129 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt"]
Nov 28 06:33:00 crc kubenswrapper[4955]: I1128 06:33:00.993633 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt" event={"ID":"c810a1d4-a881-4a83-b9cc-853762f772ee","Type":"ContainerStarted","Data":"5e4ad7313e5b55e43f6ec89b1d35585ba1bfd9375b279ef72f035651467e1782"}
Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.006301 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt" event={"ID":"c810a1d4-a881-4a83-b9cc-853762f772ee","Type":"ContainerStarted","Data":"f7bf485f4c458f2226087c73c4067e65c11c1aea246d08ec6a6ccb36effe931e"}
Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.039332 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8sdmt" podStartSLOduration=1.611776373 podStartE2EDuration="4.039305908s" podCreationTimestamp="2025-11-28 06:32:59 +0000 UTC" firstStartedPulling="2025-11-28 06:32:59.992034528 +0000 UTC m=+702.581290138" lastFinishedPulling="2025-11-28 06:33:02.419564103 +0000 UTC m=+705.008819673" observedRunningTime="2025-11-28 06:33:03.027501708 +0000 UTC m=+705.616757348" watchObservedRunningTime="2025-11-28 06:33:03.039305908 +0000 UTC m=+705.628561528"
Nov 28 06:33:03 crc
kubenswrapper[4955]: I1128 06:33:03.927321 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sh662"] Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.928081 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.929952 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2fkmr" Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.963278 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87"] Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.964791 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.967022 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.968893 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sh662"] Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.976385 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ztr67"] Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.977463 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:03 crc kubenswrapper[4955]: I1128 06:33:03.982193 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87"] Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.063614 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2"] Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.064261 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.065804 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-962qx" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.067518 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.071969 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.076830 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2"] Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083380 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8d206e9b-c6d4-4274-acf0-c404fd13eeaf-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8zt87\" (UID: \"8d206e9b-c6d4-4274-acf0-c404fd13eeaf\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083406 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-nmstate-lock\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083429 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-ovs-socket\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083484 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083533 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-dbus-socket\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083562 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5shx\" (UniqueName: \"kubernetes.io/projected/68fae9ad-af93-49b7-a741-227d048c4ee4-kube-api-access-c5shx\") pod \"nmstate-metrics-7f946cbc9-sh662\" (UID: \"68fae9ad-af93-49b7-a741-227d048c4ee4\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083589 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7r26\" (UniqueName: \"kubernetes.io/projected/ab1b9bd2-d514-41c2-8315-b035a598caa9-kube-api-access-z7r26\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083607 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl2k9\" (UniqueName: \"kubernetes.io/projected/8d206e9b-c6d4-4274-acf0-c404fd13eeaf-kube-api-access-vl2k9\") pod \"nmstate-webhook-5f6d4c5ccb-8zt87\" (UID: \"8d206e9b-c6d4-4274-acf0-c404fd13eeaf\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdnhc\" (UniqueName: \"kubernetes.io/projected/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-kube-api-access-hdnhc\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.083695 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184485 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: 
\"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184568 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-dbus-socket\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184610 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5shx\" (UniqueName: \"kubernetes.io/projected/68fae9ad-af93-49b7-a741-227d048c4ee4-kube-api-access-c5shx\") pod \"nmstate-metrics-7f946cbc9-sh662\" (UID: \"68fae9ad-af93-49b7-a741-227d048c4ee4\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184634 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7r26\" (UniqueName: \"kubernetes.io/projected/ab1b9bd2-d514-41c2-8315-b035a598caa9-kube-api-access-z7r26\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184660 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl2k9\" (UniqueName: \"kubernetes.io/projected/8d206e9b-c6d4-4274-acf0-c404fd13eeaf-kube-api-access-vl2k9\") pod \"nmstate-webhook-5f6d4c5ccb-8zt87\" (UID: \"8d206e9b-c6d4-4274-acf0-c404fd13eeaf\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184681 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdnhc\" (UniqueName: \"kubernetes.io/projected/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-kube-api-access-hdnhc\") pod 
\"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184714 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184747 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8d206e9b-c6d4-4274-acf0-c404fd13eeaf-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8zt87\" (UID: \"8d206e9b-c6d4-4274-acf0-c404fd13eeaf\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184770 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-nmstate-lock\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184803 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-ovs-socket\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.184873 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-ovs-socket\") pod 
\"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.185270 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-dbus-socket\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.185425 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ab1b9bd2-d514-41c2-8315-b035a598caa9-nmstate-lock\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: E1128 06:33:04.185540 4955 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 28 06:33:04 crc kubenswrapper[4955]: E1128 06:33:04.185590 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-plugin-serving-cert podName:bf9f276f-14a8-47e1-9eff-7faf202c0ec3 nodeName:}" failed. No retries permitted until 2025-11-28 06:33:04.685572849 +0000 UTC m=+707.274828429 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-6ksf2" (UID: "bf9f276f-14a8-47e1-9eff-7faf202c0ec3") : secret "plugin-serving-cert" not found Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.186338 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.194231 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/8d206e9b-c6d4-4274-acf0-c404fd13eeaf-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-8zt87\" (UID: \"8d206e9b-c6d4-4274-acf0-c404fd13eeaf\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.208091 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7r26\" (UniqueName: \"kubernetes.io/projected/ab1b9bd2-d514-41c2-8315-b035a598caa9-kube-api-access-z7r26\") pod \"nmstate-handler-ztr67\" (UID: \"ab1b9bd2-d514-41c2-8315-b035a598caa9\") " pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.209719 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl2k9\" (UniqueName: \"kubernetes.io/projected/8d206e9b-c6d4-4274-acf0-c404fd13eeaf-kube-api-access-vl2k9\") pod \"nmstate-webhook-5f6d4c5ccb-8zt87\" (UID: \"8d206e9b-c6d4-4274-acf0-c404fd13eeaf\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.213446 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hdnhc\" (UniqueName: \"kubernetes.io/projected/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-kube-api-access-hdnhc\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.215357 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5shx\" (UniqueName: \"kubernetes.io/projected/68fae9ad-af93-49b7-a741-227d048c4ee4-kube-api-access-c5shx\") pod \"nmstate-metrics-7f946cbc9-sh662\" (UID: \"68fae9ad-af93-49b7-a741-227d048c4ee4\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.244819 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.268092 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56c74696c7-cmh48"] Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.268843 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.279903 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56c74696c7-cmh48"] Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.285284 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-trusted-ca-bundle\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.285370 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-oauth-serving-cert\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.285392 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-config\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.285453 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mkb\" (UniqueName: \"kubernetes.io/projected/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-kube-api-access-47mkb\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.285572 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-serving-cert\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.285606 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-service-ca\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.285626 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-oauth-config\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.293539 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.320787 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.386635 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-serving-cert\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.389133 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-service-ca\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.389169 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-oauth-config\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.389226 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-trusted-ca-bundle\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.389245 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-oauth-serving-cert\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " 
pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.389263 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-config\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.389296 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47mkb\" (UniqueName: \"kubernetes.io/projected/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-kube-api-access-47mkb\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.390064 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-service-ca\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.390660 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-oauth-serving-cert\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.391698 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-config\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 
crc kubenswrapper[4955]: I1128 06:33:04.391762 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-trusted-ca-bundle\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.392127 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-oauth-config\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.392261 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-console-serving-cert\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.406827 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47mkb\" (UniqueName: \"kubernetes.io/projected/bb46b3a0-e247-4f39-9ee3-75c0c499b6b6-kube-api-access-47mkb\") pod \"console-56c74696c7-cmh48\" (UID: \"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6\") " pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.514280 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87"] Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.600764 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.689829 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-sh662"] Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.694920 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.700648 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf9f276f-14a8-47e1-9eff-7faf202c0ec3-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-6ksf2\" (UID: \"bf9f276f-14a8-47e1-9eff-7faf202c0ec3\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:04 crc kubenswrapper[4955]: I1128 06:33:04.979949 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" Nov 28 06:33:05 crc kubenswrapper[4955]: I1128 06:33:05.025115 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" event={"ID":"68fae9ad-af93-49b7-a741-227d048c4ee4","Type":"ContainerStarted","Data":"e77fa4dde9cb553766d4a6f8ff29c7f6debbb364550f7233843ef0f544acb8c6"} Nov 28 06:33:05 crc kubenswrapper[4955]: I1128 06:33:05.026672 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" event={"ID":"8d206e9b-c6d4-4274-acf0-c404fd13eeaf","Type":"ContainerStarted","Data":"46450b87cc15b815e69f396a3e6b1116f69475c5d3f01c990ababdc70c6bb089"} Nov 28 06:33:05 crc kubenswrapper[4955]: I1128 06:33:05.028148 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ztr67" event={"ID":"ab1b9bd2-d514-41c2-8315-b035a598caa9","Type":"ContainerStarted","Data":"f79a6071b134dfe9b8501c7f2541f72109edb40bc70416abc66f2034f845224a"} Nov 28 06:33:05 crc kubenswrapper[4955]: I1128 06:33:05.043480 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56c74696c7-cmh48"] Nov 28 06:33:05 crc kubenswrapper[4955]: W1128 06:33:05.052450 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb46b3a0_e247_4f39_9ee3_75c0c499b6b6.slice/crio-e64c28da4769add7a41768f3dff4727f29269bfc5fda3f0cdbe682db77c09ffc WatchSource:0}: Error finding container e64c28da4769add7a41768f3dff4727f29269bfc5fda3f0cdbe682db77c09ffc: Status 404 returned error can't find the container with id e64c28da4769add7a41768f3dff4727f29269bfc5fda3f0cdbe682db77c09ffc Nov 28 06:33:05 crc kubenswrapper[4955]: I1128 06:33:05.245041 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2"] Nov 28 06:33:06 crc kubenswrapper[4955]: I1128 
06:33:06.034532 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c74696c7-cmh48" event={"ID":"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6","Type":"ContainerStarted","Data":"4eb29c191275b118cba17d9112390e862a4f3ff385078dd1cf0321d48afa6552"} Nov 28 06:33:06 crc kubenswrapper[4955]: I1128 06:33:06.034574 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c74696c7-cmh48" event={"ID":"bb46b3a0-e247-4f39-9ee3-75c0c499b6b6","Type":"ContainerStarted","Data":"e64c28da4769add7a41768f3dff4727f29269bfc5fda3f0cdbe682db77c09ffc"} Nov 28 06:33:06 crc kubenswrapper[4955]: I1128 06:33:06.035780 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" event={"ID":"bf9f276f-14a8-47e1-9eff-7faf202c0ec3","Type":"ContainerStarted","Data":"e89142227a07abb62b98b26cf6c0a600ed3425e5ab299b934e1cd9f7a3017db4"} Nov 28 06:33:06 crc kubenswrapper[4955]: I1128 06:33:06.055897 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56c74696c7-cmh48" podStartSLOduration=2.055881263 podStartE2EDuration="2.055881263s" podCreationTimestamp="2025-11-28 06:33:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:33:06.055398059 +0000 UTC m=+708.644653659" watchObservedRunningTime="2025-11-28 06:33:06.055881263 +0000 UTC m=+708.645136823" Nov 28 06:33:07 crc kubenswrapper[4955]: I1128 06:33:07.046147 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" event={"ID":"8d206e9b-c6d4-4274-acf0-c404fd13eeaf","Type":"ContainerStarted","Data":"52cb67358186143015d3d58f8494276c4277d51a77062e4993855fe66332cf07"} Nov 28 06:33:07 crc kubenswrapper[4955]: I1128 06:33:07.047362 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:07 crc kubenswrapper[4955]: I1128 06:33:07.050740 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ztr67" event={"ID":"ab1b9bd2-d514-41c2-8315-b035a598caa9","Type":"ContainerStarted","Data":"feb9c0624df3a16eae78f5dcc66a0fff80c702b801d3853d4823ae16df27968d"} Nov 28 06:33:07 crc kubenswrapper[4955]: I1128 06:33:07.050823 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:07 crc kubenswrapper[4955]: I1128 06:33:07.053562 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" event={"ID":"68fae9ad-af93-49b7-a741-227d048c4ee4","Type":"ContainerStarted","Data":"4fd94ba551f5b22bc5759eaf7a09664d6b3aef5c9bb6d900ee12e87821fd819e"} Nov 28 06:33:07 crc kubenswrapper[4955]: I1128 06:33:07.129725 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" podStartSLOduration=1.9850291759999998 podStartE2EDuration="4.12849451s" podCreationTimestamp="2025-11-28 06:33:03 +0000 UTC" firstStartedPulling="2025-11-28 06:33:04.524820423 +0000 UTC m=+707.114075983" lastFinishedPulling="2025-11-28 06:33:06.668285727 +0000 UTC m=+709.257541317" observedRunningTime="2025-11-28 06:33:07.082227326 +0000 UTC m=+709.671482926" watchObservedRunningTime="2025-11-28 06:33:07.12849451 +0000 UTC m=+709.717750080" Nov 28 06:33:07 crc kubenswrapper[4955]: I1128 06:33:07.129840 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ztr67" podStartSLOduration=1.819496963 podStartE2EDuration="4.129836219s" podCreationTimestamp="2025-11-28 06:33:03 +0000 UTC" firstStartedPulling="2025-11-28 06:33:04.341870357 +0000 UTC m=+706.931125927" lastFinishedPulling="2025-11-28 06:33:06.652209603 +0000 UTC m=+709.241465183" 
observedRunningTime="2025-11-28 06:33:07.122513138 +0000 UTC m=+709.711768708" watchObservedRunningTime="2025-11-28 06:33:07.129836219 +0000 UTC m=+709.719091779" Nov 28 06:33:08 crc kubenswrapper[4955]: I1128 06:33:08.063759 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" event={"ID":"bf9f276f-14a8-47e1-9eff-7faf202c0ec3","Type":"ContainerStarted","Data":"dd40412e3424053db96f067544e92d6c972121d8d3fce46685143b2d578bca5a"} Nov 28 06:33:08 crc kubenswrapper[4955]: I1128 06:33:08.099824 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-6ksf2" podStartSLOduration=1.712819248 podStartE2EDuration="4.099736562s" podCreationTimestamp="2025-11-28 06:33:04 +0000 UTC" firstStartedPulling="2025-11-28 06:33:05.24689613 +0000 UTC m=+707.836151700" lastFinishedPulling="2025-11-28 06:33:07.633813444 +0000 UTC m=+710.223069014" observedRunningTime="2025-11-28 06:33:08.092144033 +0000 UTC m=+710.681399633" watchObservedRunningTime="2025-11-28 06:33:08.099736562 +0000 UTC m=+710.688992152" Nov 28 06:33:10 crc kubenswrapper[4955]: I1128 06:33:10.080903 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" event={"ID":"68fae9ad-af93-49b7-a741-227d048c4ee4","Type":"ContainerStarted","Data":"04a4cdef81ad86bb810c8548f0e677e7a36096e570592f45f54758741abb071b"} Nov 28 06:33:10 crc kubenswrapper[4955]: I1128 06:33:10.100023 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-sh662" podStartSLOduration=2.563612514 podStartE2EDuration="7.099994445s" podCreationTimestamp="2025-11-28 06:33:03 +0000 UTC" firstStartedPulling="2025-11-28 06:33:04.706532104 +0000 UTC m=+707.295787674" lastFinishedPulling="2025-11-28 06:33:09.242914035 +0000 UTC m=+711.832169605" observedRunningTime="2025-11-28 06:33:10.097758011 +0000 UTC 
m=+712.687013611" watchObservedRunningTime="2025-11-28 06:33:10.099994445 +0000 UTC m=+712.689250045" Nov 28 06:33:14 crc kubenswrapper[4955]: I1128 06:33:14.351961 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ztr67" Nov 28 06:33:14 crc kubenswrapper[4955]: I1128 06:33:14.601006 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:14 crc kubenswrapper[4955]: I1128 06:33:14.601066 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:14 crc kubenswrapper[4955]: I1128 06:33:14.608662 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:15 crc kubenswrapper[4955]: I1128 06:33:15.123197 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56c74696c7-cmh48" Nov 28 06:33:15 crc kubenswrapper[4955]: I1128 06:33:15.205002 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sxskz"] Nov 28 06:33:23 crc kubenswrapper[4955]: I1128 06:33:23.393626 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:33:23 crc kubenswrapper[4955]: I1128 06:33:23.394538 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:33:24 crc kubenswrapper[4955]: I1128 
06:33:24.301559 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-8zt87" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.747692 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999"] Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.749991 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.752194 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.768971 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999"] Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.885299 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k2mx\" (UniqueName: \"kubernetes.io/projected/7a10e06e-4190-4e64-a8de-3470d1277a4c-kube-api-access-8k2mx\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.885369 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 
06:33:37.885445 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.986448 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2mx\" (UniqueName: \"kubernetes.io/projected/7a10e06e-4190-4e64-a8de-3470d1277a4c-kube-api-access-8k2mx\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.986848 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.987151 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.988047 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:37 crc kubenswrapper[4955]: I1128 06:33:37.988285 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:38 crc kubenswrapper[4955]: I1128 06:33:38.024405 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2mx\" (UniqueName: \"kubernetes.io/projected/7a10e06e-4190-4e64-a8de-3470d1277a4c-kube-api-access-8k2mx\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:38 crc kubenswrapper[4955]: I1128 06:33:38.081238 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:38 crc kubenswrapper[4955]: I1128 06:33:38.302884 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999"] Nov 28 06:33:39 crc kubenswrapper[4955]: I1128 06:33:39.289030 4955 generic.go:334] "Generic (PLEG): container finished" podID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerID="1846306559e1b6a0af7e3dcb23ad4c9f6cb2c19b08e2392be040c409b8fddd26" exitCode=0 Nov 28 06:33:39 crc kubenswrapper[4955]: I1128 06:33:39.289499 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" event={"ID":"7a10e06e-4190-4e64-a8de-3470d1277a4c","Type":"ContainerDied","Data":"1846306559e1b6a0af7e3dcb23ad4c9f6cb2c19b08e2392be040c409b8fddd26"} Nov 28 06:33:39 crc kubenswrapper[4955]: I1128 06:33:39.289620 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" event={"ID":"7a10e06e-4190-4e64-a8de-3470d1277a4c","Type":"ContainerStarted","Data":"709f872edd0a9dc103c20336411c5a6874ebdd419ca313058cde3d6d8b7c1650"} Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.254544 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sxskz" podUID="71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" containerName="console" containerID="cri-o://35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85" gracePeriod=15 Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.802804 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sxskz_71082a13-ea8e-4a1b-af7e-fa4c3d50b8af/console/0.log" Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.803470 4955 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.935022 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-trusted-ca-bundle\") pod \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.935085 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-oauth-serving-cert\") pod \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.935135 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-serving-cert\") pod \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.935162 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-service-ca\") pod \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.935229 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-config\") pod \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.935274 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-oauth-config\") pod \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.935321 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8s6x\" (UniqueName: \"kubernetes.io/projected/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-kube-api-access-b8s6x\") pod \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\" (UID: \"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af\") " Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.936033 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" (UID: "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.936105 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-service-ca" (OuterVolumeSpecName: "service-ca") pod "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" (UID: "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.936176 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-config" (OuterVolumeSpecName: "console-config") pod "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" (UID: "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.937034 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" (UID: "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.941349 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" (UID: "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.941770 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" (UID: "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:33:40 crc kubenswrapper[4955]: I1128 06:33:40.941841 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-kube-api-access-b8s6x" (OuterVolumeSpecName: "kube-api-access-b8s6x") pod "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" (UID: "71082a13-ea8e-4a1b-af7e-fa4c3d50b8af"). InnerVolumeSpecName "kube-api-access-b8s6x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.036432 4955 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.036724 4955 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.036738 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8s6x\" (UniqueName: \"kubernetes.io/projected/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-kube-api-access-b8s6x\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.036748 4955 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.036756 4955 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.036764 4955 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.036772 4955 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:41 crc 
kubenswrapper[4955]: I1128 06:33:41.308092 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sxskz_71082a13-ea8e-4a1b-af7e-fa4c3d50b8af/console/0.log" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.308170 4955 generic.go:334] "Generic (PLEG): container finished" podID="71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" containerID="35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85" exitCode=2 Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.308257 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sxskz" event={"ID":"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af","Type":"ContainerDied","Data":"35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85"} Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.308326 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sxskz" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.308352 4955 scope.go:117] "RemoveContainer" containerID="35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.308336 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sxskz" event={"ID":"71082a13-ea8e-4a1b-af7e-fa4c3d50b8af","Type":"ContainerDied","Data":"791ddc5e2860620d41f1a3ae9081ed6fa949511939215432e19d512c094de13e"} Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.311058 4955 generic.go:334] "Generic (PLEG): container finished" podID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerID="ecbc9d69b6898fe1f49b10d478d8bab8222cc12b3c5e6015435ab62d18e41b6b" exitCode=0 Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.311101 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" 
event={"ID":"7a10e06e-4190-4e64-a8de-3470d1277a4c","Type":"ContainerDied","Data":"ecbc9d69b6898fe1f49b10d478d8bab8222cc12b3c5e6015435ab62d18e41b6b"} Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.341249 4955 scope.go:117] "RemoveContainer" containerID="35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85" Nov 28 06:33:41 crc kubenswrapper[4955]: E1128 06:33:41.350602 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85\": container with ID starting with 35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85 not found: ID does not exist" containerID="35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.350647 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85"} err="failed to get container status \"35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85\": rpc error: code = NotFound desc = could not find container \"35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85\": container with ID starting with 35a37749949280a5d896d3e7a258f268009d758d5121b9d6c3ca65353eb5da85 not found: ID does not exist" Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.377320 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sxskz"] Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.379842 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sxskz"] Nov 28 06:33:41 crc kubenswrapper[4955]: I1128 06:33:41.711061 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" path="/var/lib/kubelet/pods/71082a13-ea8e-4a1b-af7e-fa4c3d50b8af/volumes" Nov 28 06:33:42 crc 
kubenswrapper[4955]: I1128 06:33:42.322934 4955 generic.go:334] "Generic (PLEG): container finished" podID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerID="896017e072f920943a57b4621ab8e084259bd32eb93eeb0175fd6b52a15a8de3" exitCode=0 Nov 28 06:33:42 crc kubenswrapper[4955]: I1128 06:33:42.323002 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" event={"ID":"7a10e06e-4190-4e64-a8de-3470d1277a4c","Type":"ContainerDied","Data":"896017e072f920943a57b4621ab8e084259bd32eb93eeb0175fd6b52a15a8de3"} Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.659038 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.775388 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-util\") pod \"7a10e06e-4190-4e64-a8de-3470d1277a4c\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.775665 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-bundle\") pod \"7a10e06e-4190-4e64-a8de-3470d1277a4c\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.775741 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k2mx\" (UniqueName: \"kubernetes.io/projected/7a10e06e-4190-4e64-a8de-3470d1277a4c-kube-api-access-8k2mx\") pod \"7a10e06e-4190-4e64-a8de-3470d1277a4c\" (UID: \"7a10e06e-4190-4e64-a8de-3470d1277a4c\") " Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.778386 4955 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-bundle" (OuterVolumeSpecName: "bundle") pod "7a10e06e-4190-4e64-a8de-3470d1277a4c" (UID: "7a10e06e-4190-4e64-a8de-3470d1277a4c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.783655 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a10e06e-4190-4e64-a8de-3470d1277a4c-kube-api-access-8k2mx" (OuterVolumeSpecName: "kube-api-access-8k2mx") pod "7a10e06e-4190-4e64-a8de-3470d1277a4c" (UID: "7a10e06e-4190-4e64-a8de-3470d1277a4c"). InnerVolumeSpecName "kube-api-access-8k2mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.805468 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-util" (OuterVolumeSpecName: "util") pod "7a10e06e-4190-4e64-a8de-3470d1277a4c" (UID: "7a10e06e-4190-4e64-a8de-3470d1277a4c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.879026 4955 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-util\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.879100 4955 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a10e06e-4190-4e64-a8de-3470d1277a4c-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:43 crc kubenswrapper[4955]: I1128 06:33:43.879128 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k2mx\" (UniqueName: \"kubernetes.io/projected/7a10e06e-4190-4e64-a8de-3470d1277a4c-kube-api-access-8k2mx\") on node \"crc\" DevicePath \"\"" Nov 28 06:33:44 crc kubenswrapper[4955]: I1128 06:33:44.341766 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" event={"ID":"7a10e06e-4190-4e64-a8de-3470d1277a4c","Type":"ContainerDied","Data":"709f872edd0a9dc103c20336411c5a6874ebdd419ca313058cde3d6d8b7c1650"} Nov 28 06:33:44 crc kubenswrapper[4955]: I1128 06:33:44.342094 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="709f872edd0a9dc103c20336411c5a6874ebdd419ca313058cde3d6d8b7c1650" Nov 28 06:33:44 crc kubenswrapper[4955]: I1128 06:33:44.341861 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999" Nov 28 06:33:51 crc kubenswrapper[4955]: I1128 06:33:51.140571 4955 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.702691 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd"] Nov 28 06:33:52 crc kubenswrapper[4955]: E1128 06:33:52.703192 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerName="pull" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.703204 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerName="pull" Nov 28 06:33:52 crc kubenswrapper[4955]: E1128 06:33:52.703215 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" containerName="console" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.703221 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" containerName="console" Nov 28 06:33:52 crc kubenswrapper[4955]: E1128 06:33:52.703231 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerName="util" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.703237 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerName="util" Nov 28 06:33:52 crc kubenswrapper[4955]: E1128 06:33:52.703246 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerName="extract" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.703251 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a10e06e-4190-4e64-a8de-3470d1277a4c" 
containerName="extract" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.703351 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="71082a13-ea8e-4a1b-af7e-fa4c3d50b8af" containerName="console" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.703362 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a10e06e-4190-4e64-a8de-3470d1277a4c" containerName="extract" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.703970 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.705358 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.706229 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.706397 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.706575 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-p5btc" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.706696 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.720635 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd"] Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.793087 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkq5v\" (UniqueName: 
\"kubernetes.io/projected/897e2a63-d58f-4bf7-b954-7614a0b8011b-kube-api-access-pkq5v\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: \"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.793214 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/897e2a63-d58f-4bf7-b954-7614a0b8011b-apiservice-cert\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: \"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.793246 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/897e2a63-d58f-4bf7-b954-7614a0b8011b-webhook-cert\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: \"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.894225 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkq5v\" (UniqueName: \"kubernetes.io/projected/897e2a63-d58f-4bf7-b954-7614a0b8011b-kube-api-access-pkq5v\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: \"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.894320 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/897e2a63-d58f-4bf7-b954-7614a0b8011b-apiservice-cert\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: 
\"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.894366 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/897e2a63-d58f-4bf7-b954-7614a0b8011b-webhook-cert\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: \"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.900195 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/897e2a63-d58f-4bf7-b954-7614a0b8011b-apiservice-cert\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: \"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.907695 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/897e2a63-d58f-4bf7-b954-7614a0b8011b-webhook-cert\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: \"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.911630 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkq5v\" (UniqueName: \"kubernetes.io/projected/897e2a63-d58f-4bf7-b954-7614a0b8011b-kube-api-access-pkq5v\") pod \"metallb-operator-controller-manager-7ffcc65867-np4gd\" (UID: \"897e2a63-d58f-4bf7-b954-7614a0b8011b\") " pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.986609 4955 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8"] Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.987427 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.990422 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.990652 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-slvwn" Nov 28 06:33:52 crc kubenswrapper[4955]: I1128 06:33:52.992849 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.012713 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8"] Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.023820 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.098539 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55e28775-8755-420c-9c5e-99506d84594e-apiservice-cert\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") " pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.098603 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55e28775-8755-420c-9c5e-99506d84594e-webhook-cert\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") " pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.098636 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmtxd\" (UniqueName: \"kubernetes.io/projected/55e28775-8755-420c-9c5e-99506d84594e-kube-api-access-wmtxd\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") " pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.199222 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55e28775-8755-420c-9c5e-99506d84594e-webhook-cert\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") " pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.199906 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wmtxd\" (UniqueName: \"kubernetes.io/projected/55e28775-8755-420c-9c5e-99506d84594e-kube-api-access-wmtxd\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") " pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.199973 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55e28775-8755-420c-9c5e-99506d84594e-apiservice-cert\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") " pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.203676 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55e28775-8755-420c-9c5e-99506d84594e-webhook-cert\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") " pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.205968 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55e28775-8755-420c-9c5e-99506d84594e-apiservice-cert\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") " pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.220012 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmtxd\" (UniqueName: \"kubernetes.io/projected/55e28775-8755-420c-9c5e-99506d84594e-kube-api-access-wmtxd\") pod \"metallb-operator-webhook-server-674c4d7d9d-zw6p8\" (UID: \"55e28775-8755-420c-9c5e-99506d84594e\") 
" pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.257116 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd"] Nov 28 06:33:53 crc kubenswrapper[4955]: W1128 06:33:53.262403 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod897e2a63_d58f_4bf7_b954_7614a0b8011b.slice/crio-bb6f6409dfc713549335bc79f8331d475b01d5fb1e9a0f4b99a4774f7ed80796 WatchSource:0}: Error finding container bb6f6409dfc713549335bc79f8331d475b01d5fb1e9a0f4b99a4774f7ed80796: Status 404 returned error can't find the container with id bb6f6409dfc713549335bc79f8331d475b01d5fb1e9a0f4b99a4774f7ed80796 Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.303194 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.392424 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.392463 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.392494 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:33:53 crc 
kubenswrapper[4955]: I1128 06:33:53.392607 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" event={"ID":"897e2a63-d58f-4bf7-b954-7614a0b8011b","Type":"ContainerStarted","Data":"bb6f6409dfc713549335bc79f8331d475b01d5fb1e9a0f4b99a4774f7ed80796"} Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.393031 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a48f5c76d873d06051ccff10b32bc473afff507589be9330f056de9d4b7137d0"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.393080 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://a48f5c76d873d06051ccff10b32bc473afff507589be9330f056de9d4b7137d0" gracePeriod=600 Nov 28 06:33:53 crc kubenswrapper[4955]: I1128 06:33:53.783534 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8"] Nov 28 06:33:53 crc kubenswrapper[4955]: W1128 06:33:53.788628 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55e28775_8755_420c_9c5e_99506d84594e.slice/crio-96236313f5be5c50f3973180def1a17674b3be7ef66c1fcd2675ac8c4e302fce WatchSource:0}: Error finding container 96236313f5be5c50f3973180def1a17674b3be7ef66c1fcd2675ac8c4e302fce: Status 404 returned error can't find the container with id 96236313f5be5c50f3973180def1a17674b3be7ef66c1fcd2675ac8c4e302fce Nov 28 06:33:54 crc kubenswrapper[4955]: I1128 06:33:54.398443 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" event={"ID":"55e28775-8755-420c-9c5e-99506d84594e","Type":"ContainerStarted","Data":"96236313f5be5c50f3973180def1a17674b3be7ef66c1fcd2675ac8c4e302fce"} Nov 28 06:33:54 crc kubenswrapper[4955]: I1128 06:33:54.400722 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="a48f5c76d873d06051ccff10b32bc473afff507589be9330f056de9d4b7137d0" exitCode=0 Nov 28 06:33:54 crc kubenswrapper[4955]: I1128 06:33:54.400750 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"a48f5c76d873d06051ccff10b32bc473afff507589be9330f056de9d4b7137d0"} Nov 28 06:33:54 crc kubenswrapper[4955]: I1128 06:33:54.400770 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"4f33502a89d814132c8a3643f347e9c608f66ebfe86f1fe67c34b4729fe71bd9"} Nov 28 06:33:54 crc kubenswrapper[4955]: I1128 06:33:54.400785 4955 scope.go:117] "RemoveContainer" containerID="c4e6040241feb98903aeee5dc316e0267042bea33e703ef620d567288cd2e662" Nov 28 06:33:56 crc kubenswrapper[4955]: I1128 06:33:56.416776 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" event={"ID":"897e2a63-d58f-4bf7-b954-7614a0b8011b","Type":"ContainerStarted","Data":"a385ecc9170e1062941df665187ab6eb35198c63adf4d4c03673733ed6789c79"} Nov 28 06:33:56 crc kubenswrapper[4955]: I1128 06:33:56.417306 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:33:56 crc kubenswrapper[4955]: I1128 06:33:56.454230 4955 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" podStartSLOduration=1.540265118 podStartE2EDuration="4.454206889s" podCreationTimestamp="2025-11-28 06:33:52 +0000 UTC" firstStartedPulling="2025-11-28 06:33:53.264736174 +0000 UTC m=+755.853991744" lastFinishedPulling="2025-11-28 06:33:56.178677905 +0000 UTC m=+758.767933515" observedRunningTime="2025-11-28 06:33:56.453288093 +0000 UTC m=+759.042543663" watchObservedRunningTime="2025-11-28 06:33:56.454206889 +0000 UTC m=+759.043462469" Nov 28 06:33:59 crc kubenswrapper[4955]: I1128 06:33:59.436421 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" event={"ID":"55e28775-8755-420c-9c5e-99506d84594e","Type":"ContainerStarted","Data":"bf596248d5cb12a8ede3158686f79a9bd885ecc5c732f7db7490298b05b469a1"} Nov 28 06:33:59 crc kubenswrapper[4955]: I1128 06:33:59.436845 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:33:59 crc kubenswrapper[4955]: I1128 06:33:59.473271 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" podStartSLOduration=2.766685424 podStartE2EDuration="7.473235885s" podCreationTimestamp="2025-11-28 06:33:52 +0000 UTC" firstStartedPulling="2025-11-28 06:33:53.791368885 +0000 UTC m=+756.380624445" lastFinishedPulling="2025-11-28 06:33:58.497919336 +0000 UTC m=+761.087174906" observedRunningTime="2025-11-28 06:33:59.468163623 +0000 UTC m=+762.057419253" watchObservedRunningTime="2025-11-28 06:33:59.473235885 +0000 UTC m=+762.062491485" Nov 28 06:34:10 crc kubenswrapper[4955]: I1128 06:34:10.860005 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x65lw"] Nov 28 06:34:10 crc kubenswrapper[4955]: I1128 06:34:10.861456 4955 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:10 crc kubenswrapper[4955]: I1128 06:34:10.924216 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x65lw"] Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.040541 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ttj\" (UniqueName: \"kubernetes.io/projected/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-kube-api-access-h4ttj\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.040599 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-utilities\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.040667 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-catalog-content\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.142171 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ttj\" (UniqueName: \"kubernetes.io/projected/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-kube-api-access-h4ttj\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.142234 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-utilities\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.142285 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-catalog-content\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.142761 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-utilities\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.142875 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-catalog-content\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.170314 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4ttj\" (UniqueName: \"kubernetes.io/projected/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-kube-api-access-h4ttj\") pod \"certified-operators-x65lw\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.175316 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.389492 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x65lw"] Nov 28 06:34:11 crc kubenswrapper[4955]: I1128 06:34:11.509587 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65lw" event={"ID":"eeccc60e-b6ca-47e6-a8b0-3d363007efe2","Type":"ContainerStarted","Data":"9122d8fd016037274e5f76e886f5637345de1e98bf44e1da95f3b9fe6d05e2df"} Nov 28 06:34:12 crc kubenswrapper[4955]: I1128 06:34:12.528020 4955 generic.go:334] "Generic (PLEG): container finished" podID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerID="00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131" exitCode=0 Nov 28 06:34:12 crc kubenswrapper[4955]: I1128 06:34:12.528072 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65lw" event={"ID":"eeccc60e-b6ca-47e6-a8b0-3d363007efe2","Type":"ContainerDied","Data":"00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131"} Nov 28 06:34:13 crc kubenswrapper[4955]: I1128 06:34:13.309519 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-674c4d7d9d-zw6p8" Nov 28 06:34:13 crc kubenswrapper[4955]: I1128 06:34:13.537097 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65lw" event={"ID":"eeccc60e-b6ca-47e6-a8b0-3d363007efe2","Type":"ContainerStarted","Data":"0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457"} Nov 28 06:34:14 crc kubenswrapper[4955]: I1128 06:34:14.544036 4955 generic.go:334] "Generic (PLEG): container finished" podID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerID="0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457" exitCode=0 Nov 28 06:34:14 crc kubenswrapper[4955]: I1128 
06:34:14.544158 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65lw" event={"ID":"eeccc60e-b6ca-47e6-a8b0-3d363007efe2","Type":"ContainerDied","Data":"0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457"} Nov 28 06:34:16 crc kubenswrapper[4955]: I1128 06:34:16.561297 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65lw" event={"ID":"eeccc60e-b6ca-47e6-a8b0-3d363007efe2","Type":"ContainerStarted","Data":"694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1"} Nov 28 06:34:16 crc kubenswrapper[4955]: I1128 06:34:16.584254 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x65lw" podStartSLOduration=3.018401641 podStartE2EDuration="6.584230085s" podCreationTimestamp="2025-11-28 06:34:10 +0000 UTC" firstStartedPulling="2025-11-28 06:34:12.530905386 +0000 UTC m=+775.120160966" lastFinishedPulling="2025-11-28 06:34:16.09673384 +0000 UTC m=+778.685989410" observedRunningTime="2025-11-28 06:34:16.581829838 +0000 UTC m=+779.171085428" watchObservedRunningTime="2025-11-28 06:34:16.584230085 +0000 UTC m=+779.173485695" Nov 28 06:34:21 crc kubenswrapper[4955]: I1128 06:34:21.176007 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:21 crc kubenswrapper[4955]: I1128 06:34:21.176820 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:21 crc kubenswrapper[4955]: I1128 06:34:21.228863 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:21 crc kubenswrapper[4955]: I1128 06:34:21.670276 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x65lw" Nov 28 
06:34:23 crc kubenswrapper[4955]: I1128 06:34:23.651120 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x65lw"] Nov 28 06:34:23 crc kubenswrapper[4955]: I1128 06:34:23.651969 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x65lw" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerName="registry-server" containerID="cri-o://694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1" gracePeriod=2 Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.116309 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.236192 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-utilities\") pod \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.236243 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-catalog-content\") pod \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.236310 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4ttj\" (UniqueName: \"kubernetes.io/projected/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-kube-api-access-h4ttj\") pod \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\" (UID: \"eeccc60e-b6ca-47e6-a8b0-3d363007efe2\") " Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.236900 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-utilities" (OuterVolumeSpecName: "utilities") pod "eeccc60e-b6ca-47e6-a8b0-3d363007efe2" (UID: "eeccc60e-b6ca-47e6-a8b0-3d363007efe2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.245715 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-kube-api-access-h4ttj" (OuterVolumeSpecName: "kube-api-access-h4ttj") pod "eeccc60e-b6ca-47e6-a8b0-3d363007efe2" (UID: "eeccc60e-b6ca-47e6-a8b0-3d363007efe2"). InnerVolumeSpecName "kube-api-access-h4ttj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.337972 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.338025 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4ttj\" (UniqueName: \"kubernetes.io/projected/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-kube-api-access-h4ttj\") on node \"crc\" DevicePath \"\"" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.359893 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeccc60e-b6ca-47e6-a8b0-3d363007efe2" (UID: "eeccc60e-b6ca-47e6-a8b0-3d363007efe2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.439848 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeccc60e-b6ca-47e6-a8b0-3d363007efe2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.635968 4955 generic.go:334] "Generic (PLEG): container finished" podID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerID="694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1" exitCode=0 Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.636013 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65lw" event={"ID":"eeccc60e-b6ca-47e6-a8b0-3d363007efe2","Type":"ContainerDied","Data":"694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1"} Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.636042 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x65lw" event={"ID":"eeccc60e-b6ca-47e6-a8b0-3d363007efe2","Type":"ContainerDied","Data":"9122d8fd016037274e5f76e886f5637345de1e98bf44e1da95f3b9fe6d05e2df"} Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.636066 4955 scope.go:117] "RemoveContainer" containerID="694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.636121 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x65lw" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.661177 4955 scope.go:117] "RemoveContainer" containerID="0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.676971 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x65lw"] Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.681933 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x65lw"] Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.707944 4955 scope.go:117] "RemoveContainer" containerID="00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.724343 4955 scope.go:117] "RemoveContainer" containerID="694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1" Nov 28 06:34:24 crc kubenswrapper[4955]: E1128 06:34:24.724927 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1\": container with ID starting with 694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1 not found: ID does not exist" containerID="694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.724969 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1"} err="failed to get container status \"694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1\": rpc error: code = NotFound desc = could not find container \"694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1\": container with ID starting with 694d8770753a0a8bf3821a8e863326b85e6ebff66ef1d9f666a9f3da1e8f30b1 not 
found: ID does not exist" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.724995 4955 scope.go:117] "RemoveContainer" containerID="0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457" Nov 28 06:34:24 crc kubenswrapper[4955]: E1128 06:34:24.725284 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457\": container with ID starting with 0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457 not found: ID does not exist" containerID="0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.725312 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457"} err="failed to get container status \"0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457\": rpc error: code = NotFound desc = could not find container \"0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457\": container with ID starting with 0ac250513b85d038b09bf0ec0c94242eadcab9316e879e417c3b75143eacb457 not found: ID does not exist" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.725331 4955 scope.go:117] "RemoveContainer" containerID="00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131" Nov 28 06:34:24 crc kubenswrapper[4955]: E1128 06:34:24.725588 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131\": container with ID starting with 00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131 not found: ID does not exist" containerID="00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131" Nov 28 06:34:24 crc kubenswrapper[4955]: I1128 06:34:24.725627 4955 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131"} err="failed to get container status \"00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131\": rpc error: code = NotFound desc = could not find container \"00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131\": container with ID starting with 00367ee2752e1d95f8f90be885a550e89e99ed70c154d83f5f15abf1c5d39131 not found: ID does not exist" Nov 28 06:34:25 crc kubenswrapper[4955]: I1128 06:34:25.717894 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" path="/var/lib/kubelet/pods/eeccc60e-b6ca-47e6-a8b0-3d363007efe2/volumes" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.028711 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7ffcc65867-np4gd" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.738794 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8"] Nov 28 06:34:33 crc kubenswrapper[4955]: E1128 06:34:33.739076 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerName="extract-content" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.739101 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerName="extract-content" Nov 28 06:34:33 crc kubenswrapper[4955]: E1128 06:34:33.739121 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerName="extract-utilities" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.739135 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerName="extract-utilities" Nov 28 06:34:33 crc 
kubenswrapper[4955]: E1128 06:34:33.739161 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerName="registry-server" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.739172 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerName="registry-server" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.739406 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeccc60e-b6ca-47e6-a8b0-3d363007efe2" containerName="registry-server" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.739935 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8"] Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.740045 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.743589 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-zpb2b" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.743964 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.748447 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-m4zws"] Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.750685 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.752634 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.752823 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.773639 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtfw\" (UniqueName: \"kubernetes.io/projected/73d34f84-8626-4d9a-9f32-e5b041f75636-kube-api-access-ldtfw\") pod \"frr-k8s-webhook-server-7fcb986d4-hgxx8\" (UID: \"73d34f84-8626-4d9a-9f32-e5b041f75636\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.773798 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73d34f84-8626-4d9a-9f32-e5b041f75636-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-hgxx8\" (UID: \"73d34f84-8626-4d9a-9f32-e5b041f75636\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.788043 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cmlb7"] Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.788889 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.795818 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.796234 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.796344 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.804115 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-n86gs" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.814795 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-sjhcg"] Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.815623 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.816914 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.824217 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-sjhcg"] Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874435 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-reloader\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874474 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874496 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-conf\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874564 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f30adb5-d334-4ab0-9acc-8c83ca002efa-cert\") pod \"controller-f8648f98b-sjhcg\" (UID: \"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874627 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zh7x\" (UniqueName: \"kubernetes.io/projected/1f30adb5-d334-4ab0-9acc-8c83ca002efa-kube-api-access-9zh7x\") pod \"controller-f8648f98b-sjhcg\" (UID: \"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874668 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-startup\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874686 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-sockets\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874723 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f30adb5-d334-4ab0-9acc-8c83ca002efa-metrics-certs\") pod \"controller-f8648f98b-sjhcg\" (UID: \"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874771 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metallb-excludel2\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874790 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-metrics\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874832 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtfw\" (UniqueName: \"kubernetes.io/projected/73d34f84-8626-4d9a-9f32-e5b041f75636-kube-api-access-ldtfw\") pod \"frr-k8s-webhook-server-7fcb986d4-hgxx8\" (UID: \"73d34f84-8626-4d9a-9f32-e5b041f75636\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874851 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmb2\" (UniqueName: \"kubernetes.io/projected/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-kube-api-access-xdmb2\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874896 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metrics-certs\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874918 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-metrics-certs\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874969 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-v44wc\" (UniqueName: \"kubernetes.io/projected/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-kube-api-access-v44wc\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.874988 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73d34f84-8626-4d9a-9f32-e5b041f75636-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-hgxx8\" (UID: \"73d34f84-8626-4d9a-9f32-e5b041f75636\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.883151 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/73d34f84-8626-4d9a-9f32-e5b041f75636-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-hgxx8\" (UID: \"73d34f84-8626-4d9a-9f32-e5b041f75636\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.902118 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtfw\" (UniqueName: \"kubernetes.io/projected/73d34f84-8626-4d9a-9f32-e5b041f75636-kube-api-access-ldtfw\") pod \"frr-k8s-webhook-server-7fcb986d4-hgxx8\" (UID: \"73d34f84-8626-4d9a-9f32-e5b041f75636\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.976616 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.976675 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-conf\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.976693 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f30adb5-d334-4ab0-9acc-8c83ca002efa-cert\") pod \"controller-f8648f98b-sjhcg\" (UID: \"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.976751 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zh7x\" (UniqueName: \"kubernetes.io/projected/1f30adb5-d334-4ab0-9acc-8c83ca002efa-kube-api-access-9zh7x\") pod \"controller-f8648f98b-sjhcg\" (UID: \"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.976776 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-startup\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.976792 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-sockets\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.976991 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f30adb5-d334-4ab0-9acc-8c83ca002efa-metrics-certs\") pod \"controller-f8648f98b-sjhcg\" (UID: 
\"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977009 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metallb-excludel2\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: E1128 06:34:33.976811 4955 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 06:34:33 crc kubenswrapper[4955]: E1128 06:34:33.977093 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist podName:ca7c2d77-7f33-4e39-8cc0-4ac415b9d430 nodeName:}" failed. No retries permitted until 2025-11-28 06:34:34.477076686 +0000 UTC m=+797.066332256 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist") pod "speaker-cmlb7" (UID: "ca7c2d77-7f33-4e39-8cc0-4ac415b9d430") : secret "metallb-memberlist" not found Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977170 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-conf\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977203 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-sockets\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977253 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-metrics\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977555 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-metrics\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977835 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metallb-excludel2\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 
28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977908 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmb2\" (UniqueName: \"kubernetes.io/projected/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-kube-api-access-xdmb2\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977939 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metrics-certs\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.977908 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-frr-startup\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.978006 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-metrics-certs\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: E1128 06:34:33.978075 4955 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 28 06:34:33 crc kubenswrapper[4955]: E1128 06:34:33.978101 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metrics-certs podName:ca7c2d77-7f33-4e39-8cc0-4ac415b9d430 nodeName:}" failed. No retries permitted until 2025-11-28 06:34:34.478093625 +0000 UTC m=+797.067349195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metrics-certs") pod "speaker-cmlb7" (UID: "ca7c2d77-7f33-4e39-8cc0-4ac415b9d430") : secret "speaker-certs-secret" not found Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.978139 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44wc\" (UniqueName: \"kubernetes.io/projected/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-kube-api-access-v44wc\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.978457 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-reloader\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.978788 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-reloader\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.980299 4955 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.980532 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-metrics-certs\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.981930 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f30adb5-d334-4ab0-9acc-8c83ca002efa-metrics-certs\") pod \"controller-f8648f98b-sjhcg\" (UID: \"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.990388 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f30adb5-d334-4ab0-9acc-8c83ca002efa-cert\") pod \"controller-f8648f98b-sjhcg\" (UID: \"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.997492 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmb2\" (UniqueName: \"kubernetes.io/projected/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-kube-api-access-xdmb2\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:33 crc kubenswrapper[4955]: I1128 06:34:33.999284 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zh7x\" (UniqueName: \"kubernetes.io/projected/1f30adb5-d334-4ab0-9acc-8c83ca002efa-kube-api-access-9zh7x\") pod \"controller-f8648f98b-sjhcg\" (UID: \"1f30adb5-d334-4ab0-9acc-8c83ca002efa\") " pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.000661 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44wc\" (UniqueName: \"kubernetes.io/projected/fe5b3b04-5092-4f4f-b2e4-9b4ede37f887-kube-api-access-v44wc\") pod \"frr-k8s-m4zws\" (UID: \"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887\") " pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.070206 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.083317 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.175328 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.306495 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8"] Nov 28 06:34:34 crc kubenswrapper[4955]: W1128 06:34:34.311407 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73d34f84_8626_4d9a_9f32_e5b041f75636.slice/crio-7e20c52f96ef04eed369a2648628633d12e809db958ab8dc7da4efece7c66b50 WatchSource:0}: Error finding container 7e20c52f96ef04eed369a2648628633d12e809db958ab8dc7da4efece7c66b50: Status 404 returned error can't find the container with id 7e20c52f96ef04eed369a2648628633d12e809db958ab8dc7da4efece7c66b50 Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.386768 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-sjhcg"] Nov 28 06:34:34 crc kubenswrapper[4955]: W1128 06:34:34.396592 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f30adb5_d334_4ab0_9acc_8c83ca002efa.slice/crio-dad2f2bf3073cd8acc6b014d6531c19113186c23457e9f1ed52c112e348d0db9 WatchSource:0}: Error finding container dad2f2bf3073cd8acc6b014d6531c19113186c23457e9f1ed52c112e348d0db9: Status 404 returned error can't find the container with id dad2f2bf3073cd8acc6b014d6531c19113186c23457e9f1ed52c112e348d0db9 Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.485768 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metrics-certs\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.486150 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:34 crc kubenswrapper[4955]: E1128 06:34:34.486264 4955 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 06:34:34 crc kubenswrapper[4955]: E1128 06:34:34.486329 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist podName:ca7c2d77-7f33-4e39-8cc0-4ac415b9d430 nodeName:}" failed. No retries permitted until 2025-11-28 06:34:35.486312491 +0000 UTC m=+798.075568071 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist") pod "speaker-cmlb7" (UID: "ca7c2d77-7f33-4e39-8cc0-4ac415b9d430") : secret "metallb-memberlist" not found Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.492127 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-metrics-certs\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.707342 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-sjhcg" event={"ID":"1f30adb5-d334-4ab0-9acc-8c83ca002efa","Type":"ContainerStarted","Data":"fe60f726e2e1431738cad46aeb4bb2cc6fbd3f9d56a731ed69ea7e51a87519c9"} Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.707383 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-sjhcg" event={"ID":"1f30adb5-d334-4ab0-9acc-8c83ca002efa","Type":"ContainerStarted","Data":"26e7905407137e7ec3e7505764378b7eeb6abb3aa32e04389a1cbb871b1e46a2"} Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.707394 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-sjhcg" event={"ID":"1f30adb5-d334-4ab0-9acc-8c83ca002efa","Type":"ContainerStarted","Data":"dad2f2bf3073cd8acc6b014d6531c19113186c23457e9f1ed52c112e348d0db9"} Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.707408 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.708410 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" 
event={"ID":"73d34f84-8626-4d9a-9f32-e5b041f75636","Type":"ContainerStarted","Data":"7e20c52f96ef04eed369a2648628633d12e809db958ab8dc7da4efece7c66b50"} Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.709267 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerStarted","Data":"5bcc9970aa7126347cc63b6c547b2328f5b26c3cbb6c8765d26a8f7d471201fe"} Nov 28 06:34:34 crc kubenswrapper[4955]: I1128 06:34:34.722400 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-sjhcg" podStartSLOduration=1.722382978 podStartE2EDuration="1.722382978s" podCreationTimestamp="2025-11-28 06:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:34:34.718988273 +0000 UTC m=+797.308243843" watchObservedRunningTime="2025-11-28 06:34:34.722382978 +0000 UTC m=+797.311638548" Nov 28 06:34:35 crc kubenswrapper[4955]: I1128 06:34:35.501815 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:35 crc kubenswrapper[4955]: I1128 06:34:35.508559 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ca7c2d77-7f33-4e39-8cc0-4ac415b9d430-memberlist\") pod \"speaker-cmlb7\" (UID: \"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430\") " pod="metallb-system/speaker-cmlb7" Nov 28 06:34:35 crc kubenswrapper[4955]: I1128 06:34:35.663413 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-cmlb7" Nov 28 06:34:35 crc kubenswrapper[4955]: W1128 06:34:35.695403 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca7c2d77_7f33_4e39_8cc0_4ac415b9d430.slice/crio-12f09cf00ff89d3484cc51cfba542c19b2252a8b2873e43388f6f6a40620ad3d WatchSource:0}: Error finding container 12f09cf00ff89d3484cc51cfba542c19b2252a8b2873e43388f6f6a40620ad3d: Status 404 returned error can't find the container with id 12f09cf00ff89d3484cc51cfba542c19b2252a8b2873e43388f6f6a40620ad3d Nov 28 06:34:35 crc kubenswrapper[4955]: I1128 06:34:35.720802 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cmlb7" event={"ID":"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430","Type":"ContainerStarted","Data":"12f09cf00ff89d3484cc51cfba542c19b2252a8b2873e43388f6f6a40620ad3d"} Nov 28 06:34:36 crc kubenswrapper[4955]: I1128 06:34:36.747907 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cmlb7" event={"ID":"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430","Type":"ContainerStarted","Data":"6a8a0992a61220bf8f143e01e0b5df397d99acccab8eab5e218fb2058fd96f27"} Nov 28 06:34:36 crc kubenswrapper[4955]: I1128 06:34:36.748257 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cmlb7" event={"ID":"ca7c2d77-7f33-4e39-8cc0-4ac415b9d430","Type":"ContainerStarted","Data":"7f3f06684dceffddfe49005574d048cbd6c66acd6c8edbd80caab472baa5f8b7"} Nov 28 06:34:36 crc kubenswrapper[4955]: I1128 06:34:36.748400 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cmlb7" Nov 28 06:34:36 crc kubenswrapper[4955]: I1128 06:34:36.764481 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cmlb7" podStartSLOduration=3.76446543 podStartE2EDuration="3.76446543s" podCreationTimestamp="2025-11-28 06:34:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:34:36.761596499 +0000 UTC m=+799.350852079" watchObservedRunningTime="2025-11-28 06:34:36.76446543 +0000 UTC m=+799.353721000" Nov 28 06:34:41 crc kubenswrapper[4955]: I1128 06:34:41.782874 4955 generic.go:334] "Generic (PLEG): container finished" podID="fe5b3b04-5092-4f4f-b2e4-9b4ede37f887" containerID="23bf96ddcd6dc37170e3f9a64225ba2835f7e210e38a4c14a988bd40ad0445c0" exitCode=0 Nov 28 06:34:41 crc kubenswrapper[4955]: I1128 06:34:41.782975 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerDied","Data":"23bf96ddcd6dc37170e3f9a64225ba2835f7e210e38a4c14a988bd40ad0445c0"} Nov 28 06:34:41 crc kubenswrapper[4955]: I1128 06:34:41.790556 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" event={"ID":"73d34f84-8626-4d9a-9f32-e5b041f75636","Type":"ContainerStarted","Data":"d4445e2191ee5642dfabcb7f51568a15183931fcbb184fadcce1274e27282192"} Nov 28 06:34:41 crc kubenswrapper[4955]: I1128 06:34:41.790892 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:41 crc kubenswrapper[4955]: I1128 06:34:41.847120 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" podStartSLOduration=1.7727967420000001 podStartE2EDuration="8.847097582s" podCreationTimestamp="2025-11-28 06:34:33 +0000 UTC" firstStartedPulling="2025-11-28 06:34:34.314361011 +0000 UTC m=+796.903616581" lastFinishedPulling="2025-11-28 06:34:41.388661861 +0000 UTC m=+803.977917421" observedRunningTime="2025-11-28 06:34:41.846977459 +0000 UTC m=+804.436233059" watchObservedRunningTime="2025-11-28 06:34:41.847097582 +0000 UTC m=+804.436353172" Nov 28 
06:34:42 crc kubenswrapper[4955]: I1128 06:34:42.799317 4955 generic.go:334] "Generic (PLEG): container finished" podID="fe5b3b04-5092-4f4f-b2e4-9b4ede37f887" containerID="dd8ea7036192d608422c547b5b849021c8db3a84a27316927d76b8ee9d96867a" exitCode=0 Nov 28 06:34:42 crc kubenswrapper[4955]: I1128 06:34:42.799384 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerDied","Data":"dd8ea7036192d608422c547b5b849021c8db3a84a27316927d76b8ee9d96867a"} Nov 28 06:34:43 crc kubenswrapper[4955]: I1128 06:34:43.809542 4955 generic.go:334] "Generic (PLEG): container finished" podID="fe5b3b04-5092-4f4f-b2e4-9b4ede37f887" containerID="535a7fdad018308ff702f604242550c78d8db75dd2b03abc309b3929722d49bf" exitCode=0 Nov 28 06:34:43 crc kubenswrapper[4955]: I1128 06:34:43.809641 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerDied","Data":"535a7fdad018308ff702f604242550c78d8db75dd2b03abc309b3929722d49bf"} Nov 28 06:34:44 crc kubenswrapper[4955]: I1128 06:34:44.182381 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-sjhcg" Nov 28 06:34:44 crc kubenswrapper[4955]: I1128 06:34:44.836793 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerStarted","Data":"76cb3c6f4bac761fbebe9fe0af7e448080cf342a4858ab9e94d8bac1cb959ef1"} Nov 28 06:34:44 crc kubenswrapper[4955]: I1128 06:34:44.837155 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerStarted","Data":"e0c89112d95d143bbb88af2dac392f0a8a315c40a25043031587233a65ed17b8"} Nov 28 06:34:44 crc kubenswrapper[4955]: I1128 06:34:44.837167 4955 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerStarted","Data":"82c3ba508666ff4a646ad4b91c823680807145105508a34e8a6fdb1f465a75a2"} Nov 28 06:34:44 crc kubenswrapper[4955]: I1128 06:34:44.837176 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerStarted","Data":"a161d0dfeffec2d4b4199c895cf891fc3e7de78b9189383ce656c4cc6109cdc5"} Nov 28 06:34:45 crc kubenswrapper[4955]: I1128 06:34:45.668772 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cmlb7" Nov 28 06:34:45 crc kubenswrapper[4955]: I1128 06:34:45.846760 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerStarted","Data":"57fdd437a40415a4591489e287498c9138c02daf826c32694db7f854f7fd8326"} Nov 28 06:34:45 crc kubenswrapper[4955]: I1128 06:34:45.846821 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-m4zws" event={"ID":"fe5b3b04-5092-4f4f-b2e4-9b4ede37f887","Type":"ContainerStarted","Data":"260fbb2f9cfbbb1d8ccc5e8ba9a1dcf13868845695d752d9c537d5610588b2f9"} Nov 28 06:34:45 crc kubenswrapper[4955]: I1128 06:34:45.848130 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:45 crc kubenswrapper[4955]: I1128 06:34:45.871229 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-m4zws" podStartSLOduration=5.879818136 podStartE2EDuration="12.871196992s" podCreationTimestamp="2025-11-28 06:34:33 +0000 UTC" firstStartedPulling="2025-11-28 06:34:34.392025188 +0000 UTC m=+796.981280758" lastFinishedPulling="2025-11-28 06:34:41.383404054 +0000 UTC m=+803.972659614" observedRunningTime="2025-11-28 06:34:45.865262016 +0000 UTC m=+808.454517596" 
watchObservedRunningTime="2025-11-28 06:34:45.871196992 +0000 UTC m=+808.460452562" Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.414158 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-km7d6"] Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.415058 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-km7d6" Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.416869 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zjbm4" Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.417396 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.417556 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.419336 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-km7d6"] Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.525132 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt65f\" (UniqueName: \"kubernetes.io/projected/378a833c-0977-4894-8124-188101f0680c-kube-api-access-dt65f\") pod \"openstack-operator-index-km7d6\" (UID: \"378a833c-0977-4894-8124-188101f0680c\") " pod="openstack-operators/openstack-operator-index-km7d6" Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.625900 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt65f\" (UniqueName: \"kubernetes.io/projected/378a833c-0977-4894-8124-188101f0680c-kube-api-access-dt65f\") pod \"openstack-operator-index-km7d6\" (UID: \"378a833c-0977-4894-8124-188101f0680c\") " 
pod="openstack-operators/openstack-operator-index-km7d6" Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.653367 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt65f\" (UniqueName: \"kubernetes.io/projected/378a833c-0977-4894-8124-188101f0680c-kube-api-access-dt65f\") pod \"openstack-operator-index-km7d6\" (UID: \"378a833c-0977-4894-8124-188101f0680c\") " pod="openstack-operators/openstack-operator-index-km7d6" Nov 28 06:34:48 crc kubenswrapper[4955]: I1128 06:34:48.740218 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-km7d6" Nov 28 06:34:49 crc kubenswrapper[4955]: I1128 06:34:49.084537 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:49 crc kubenswrapper[4955]: I1128 06:34:49.147138 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-km7d6"] Nov 28 06:34:49 crc kubenswrapper[4955]: I1128 06:34:49.153707 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:49 crc kubenswrapper[4955]: W1128 06:34:49.154671 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod378a833c_0977_4894_8124_188101f0680c.slice/crio-3e5a1229ab90afa119117757b66648d9aa4d05a63b4c885d8924bc59d854cd62 WatchSource:0}: Error finding container 3e5a1229ab90afa119117757b66648d9aa4d05a63b4c885d8924bc59d854cd62: Status 404 returned error can't find the container with id 3e5a1229ab90afa119117757b66648d9aa4d05a63b4c885d8924bc59d854cd62 Nov 28 06:34:49 crc kubenswrapper[4955]: I1128 06:34:49.874099 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-km7d6" 
event={"ID":"378a833c-0977-4894-8124-188101f0680c","Type":"ContainerStarted","Data":"3e5a1229ab90afa119117757b66648d9aa4d05a63b4c885d8924bc59d854cd62"} Nov 28 06:34:51 crc kubenswrapper[4955]: I1128 06:34:51.783667 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-km7d6"] Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.388162 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6zdst"] Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.389322 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.414306 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6zdst"] Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.476932 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qks7\" (UniqueName: \"kubernetes.io/projected/739948d5-645f-4c91-9372-588a7128b7b2-kube-api-access-7qks7\") pod \"openstack-operator-index-6zdst\" (UID: \"739948d5-645f-4c91-9372-588a7128b7b2\") " pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.578139 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qks7\" (UniqueName: \"kubernetes.io/projected/739948d5-645f-4c91-9372-588a7128b7b2-kube-api-access-7qks7\") pod \"openstack-operator-index-6zdst\" (UID: \"739948d5-645f-4c91-9372-588a7128b7b2\") " pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.610730 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qks7\" (UniqueName: \"kubernetes.io/projected/739948d5-645f-4c91-9372-588a7128b7b2-kube-api-access-7qks7\") pod 
\"openstack-operator-index-6zdst\" (UID: \"739948d5-645f-4c91-9372-588a7128b7b2\") " pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.710020 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.897932 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-km7d6" event={"ID":"378a833c-0977-4894-8124-188101f0680c","Type":"ContainerStarted","Data":"944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2"} Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.898028 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-km7d6" podUID="378a833c-0977-4894-8124-188101f0680c" containerName="registry-server" containerID="cri-o://944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2" gracePeriod=2 Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.923027 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-km7d6" podStartSLOduration=1.881938536 podStartE2EDuration="4.923009123s" podCreationTimestamp="2025-11-28 06:34:48 +0000 UTC" firstStartedPulling="2025-11-28 06:34:49.15854636 +0000 UTC m=+811.747801970" lastFinishedPulling="2025-11-28 06:34:52.199616947 +0000 UTC m=+814.788872557" observedRunningTime="2025-11-28 06:34:52.917481683 +0000 UTC m=+815.506737253" watchObservedRunningTime="2025-11-28 06:34:52.923009123 +0000 UTC m=+815.512264693" Nov 28 06:34:52 crc kubenswrapper[4955]: I1128 06:34:52.965437 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6zdst"] Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.311857 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-km7d6" Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.391020 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt65f\" (UniqueName: \"kubernetes.io/projected/378a833c-0977-4894-8124-188101f0680c-kube-api-access-dt65f\") pod \"378a833c-0977-4894-8124-188101f0680c\" (UID: \"378a833c-0977-4894-8124-188101f0680c\") " Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.399100 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378a833c-0977-4894-8124-188101f0680c-kube-api-access-dt65f" (OuterVolumeSpecName: "kube-api-access-dt65f") pod "378a833c-0977-4894-8124-188101f0680c" (UID: "378a833c-0977-4894-8124-188101f0680c"). InnerVolumeSpecName "kube-api-access-dt65f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.492401 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt65f\" (UniqueName: \"kubernetes.io/projected/378a833c-0977-4894-8124-188101f0680c-kube-api-access-dt65f\") on node \"crc\" DevicePath \"\"" Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.906889 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6zdst" event={"ID":"739948d5-645f-4c91-9372-588a7128b7b2","Type":"ContainerStarted","Data":"4b29f1a22ce791284c93762fe13b4805e70855f4d9cc109801a7059696da4138"} Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.907926 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6zdst" event={"ID":"739948d5-645f-4c91-9372-588a7128b7b2","Type":"ContainerStarted","Data":"2663d6b91552eebc47da34911f4b3dbab160ea86368cddc098cf6241e65f7212"} Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.908968 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="378a833c-0977-4894-8124-188101f0680c" containerID="944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2" exitCode=0 Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.909017 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-km7d6" event={"ID":"378a833c-0977-4894-8124-188101f0680c","Type":"ContainerDied","Data":"944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2"} Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.909049 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-km7d6" event={"ID":"378a833c-0977-4894-8124-188101f0680c","Type":"ContainerDied","Data":"3e5a1229ab90afa119117757b66648d9aa4d05a63b4c885d8924bc59d854cd62"} Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.909069 4955 scope.go:117] "RemoveContainer" containerID="944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2" Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.909081 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-km7d6" Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.932287 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6zdst" podStartSLOduration=1.861175687 podStartE2EDuration="1.932265571s" podCreationTimestamp="2025-11-28 06:34:52 +0000 UTC" firstStartedPulling="2025-11-28 06:34:53.013669673 +0000 UTC m=+815.602925243" lastFinishedPulling="2025-11-28 06:34:53.084759557 +0000 UTC m=+815.674015127" observedRunningTime="2025-11-28 06:34:53.925777063 +0000 UTC m=+816.515032643" watchObservedRunningTime="2025-11-28 06:34:53.932265571 +0000 UTC m=+816.521521141" Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.942675 4955 scope.go:117] "RemoveContainer" containerID="944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2" Nov 28 06:34:53 crc kubenswrapper[4955]: E1128 06:34:53.943388 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2\": container with ID starting with 944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2 not found: ID does not exist" containerID="944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2" Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.943460 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2"} err="failed to get container status \"944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2\": rpc error: code = NotFound desc = could not find container \"944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2\": container with ID starting with 944dc233380f3b789acbeb3fb3c460a9147ab5f3d1f853ff36acf492f5118ce2 not found: ID does not exist" Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 
06:34:53.953548 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-km7d6"] Nov 28 06:34:53 crc kubenswrapper[4955]: I1128 06:34:53.960283 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-km7d6"] Nov 28 06:34:54 crc kubenswrapper[4955]: I1128 06:34:54.077158 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-hgxx8" Nov 28 06:34:54 crc kubenswrapper[4955]: I1128 06:34:54.086672 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-m4zws" Nov 28 06:34:55 crc kubenswrapper[4955]: I1128 06:34:55.718372 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378a833c-0977-4894-8124-188101f0680c" path="/var/lib/kubelet/pods/378a833c-0977-4894-8124-188101f0680c/volumes" Nov 28 06:35:02 crc kubenswrapper[4955]: I1128 06:35:02.710738 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:35:02 crc kubenswrapper[4955]: I1128 06:35:02.711092 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:35:02 crc kubenswrapper[4955]: I1128 06:35:02.756832 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:35:03 crc kubenswrapper[4955]: I1128 06:35:03.007173 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6zdst" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.241850 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl"] Nov 28 06:35:04 crc kubenswrapper[4955]: E1128 06:35:04.242304 4955 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="378a833c-0977-4894-8124-188101f0680c" containerName="registry-server" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.242332 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="378a833c-0977-4894-8124-188101f0680c" containerName="registry-server" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.242634 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="378a833c-0977-4894-8124-188101f0680c" containerName="registry-server" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.244481 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.247621 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-497kl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.249126 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl"] Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.382294 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-bundle\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.382406 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csk4q\" (UniqueName: \"kubernetes.io/projected/af247d46-e077-45be-af71-143bfc2cd71c-kube-api-access-csk4q\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: 
\"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.382471 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-util\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.484645 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-bundle\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.484737 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csk4q\" (UniqueName: \"kubernetes.io/projected/af247d46-e077-45be-af71-143bfc2cd71c-kube-api-access-csk4q\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.484821 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-util\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 
06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.485309 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-bundle\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.485546 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-util\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.504543 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csk4q\" (UniqueName: \"kubernetes.io/projected/af247d46-e077-45be-af71-143bfc2cd71c-kube-api-access-csk4q\") pod \"8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.570381 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.815549 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl"] Nov 28 06:35:04 crc kubenswrapper[4955]: W1128 06:35:04.819158 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf247d46_e077_45be_af71_143bfc2cd71c.slice/crio-4c69bc9df3da98dd5ad414a5e79f498fcf73cb64bc41d62375ef32cbf61b8210 WatchSource:0}: Error finding container 4c69bc9df3da98dd5ad414a5e79f498fcf73cb64bc41d62375ef32cbf61b8210: Status 404 returned error can't find the container with id 4c69bc9df3da98dd5ad414a5e79f498fcf73cb64bc41d62375ef32cbf61b8210 Nov 28 06:35:04 crc kubenswrapper[4955]: I1128 06:35:04.993074 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" event={"ID":"af247d46-e077-45be-af71-143bfc2cd71c","Type":"ContainerStarted","Data":"4c69bc9df3da98dd5ad414a5e79f498fcf73cb64bc41d62375ef32cbf61b8210"} Nov 28 06:35:06 crc kubenswrapper[4955]: I1128 06:35:06.004262 4955 generic.go:334] "Generic (PLEG): container finished" podID="af247d46-e077-45be-af71-143bfc2cd71c" containerID="021c4cf41de7ca70596293682f3824a4621af0dc73f6ea0a556b6eceb0954054" exitCode=0 Nov 28 06:35:06 crc kubenswrapper[4955]: I1128 06:35:06.004341 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" event={"ID":"af247d46-e077-45be-af71-143bfc2cd71c","Type":"ContainerDied","Data":"021c4cf41de7ca70596293682f3824a4621af0dc73f6ea0a556b6eceb0954054"} Nov 28 06:35:07 crc kubenswrapper[4955]: I1128 06:35:07.015093 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="af247d46-e077-45be-af71-143bfc2cd71c" containerID="e4c9de9e93c90abae7a7bc3f6b18e2d40cd6a480da3824a227db1519f5fa726f" exitCode=0 Nov 28 06:35:07 crc kubenswrapper[4955]: I1128 06:35:07.015197 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" event={"ID":"af247d46-e077-45be-af71-143bfc2cd71c","Type":"ContainerDied","Data":"e4c9de9e93c90abae7a7bc3f6b18e2d40cd6a480da3824a227db1519f5fa726f"} Nov 28 06:35:08 crc kubenswrapper[4955]: I1128 06:35:08.025947 4955 generic.go:334] "Generic (PLEG): container finished" podID="af247d46-e077-45be-af71-143bfc2cd71c" containerID="adce3a6e5b841d4534872a2bb88c3149b46e752420e2d3edfeaee1a0c6fa736c" exitCode=0 Nov 28 06:35:08 crc kubenswrapper[4955]: I1128 06:35:08.026004 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" event={"ID":"af247d46-e077-45be-af71-143bfc2cd71c","Type":"ContainerDied","Data":"adce3a6e5b841d4534872a2bb88c3149b46e752420e2d3edfeaee1a0c6fa736c"} Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.297782 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.464677 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-util\") pod \"af247d46-e077-45be-af71-143bfc2cd71c\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.464806 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csk4q\" (UniqueName: \"kubernetes.io/projected/af247d46-e077-45be-af71-143bfc2cd71c-kube-api-access-csk4q\") pod \"af247d46-e077-45be-af71-143bfc2cd71c\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.464978 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-bundle\") pod \"af247d46-e077-45be-af71-143bfc2cd71c\" (UID: \"af247d46-e077-45be-af71-143bfc2cd71c\") " Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.465661 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-bundle" (OuterVolumeSpecName: "bundle") pod "af247d46-e077-45be-af71-143bfc2cd71c" (UID: "af247d46-e077-45be-af71-143bfc2cd71c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.465960 4955 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.470589 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af247d46-e077-45be-af71-143bfc2cd71c-kube-api-access-csk4q" (OuterVolumeSpecName: "kube-api-access-csk4q") pod "af247d46-e077-45be-af71-143bfc2cd71c" (UID: "af247d46-e077-45be-af71-143bfc2cd71c"). InnerVolumeSpecName "kube-api-access-csk4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.477445 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-util" (OuterVolumeSpecName: "util") pod "af247d46-e077-45be-af71-143bfc2cd71c" (UID: "af247d46-e077-45be-af71-143bfc2cd71c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.566920 4955 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af247d46-e077-45be-af71-143bfc2cd71c-util\") on node \"crc\" DevicePath \"\"" Nov 28 06:35:09 crc kubenswrapper[4955]: I1128 06:35:09.566971 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csk4q\" (UniqueName: \"kubernetes.io/projected/af247d46-e077-45be-af71-143bfc2cd71c-kube-api-access-csk4q\") on node \"crc\" DevicePath \"\"" Nov 28 06:35:10 crc kubenswrapper[4955]: I1128 06:35:10.041291 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" event={"ID":"af247d46-e077-45be-af71-143bfc2cd71c","Type":"ContainerDied","Data":"4c69bc9df3da98dd5ad414a5e79f498fcf73cb64bc41d62375ef32cbf61b8210"} Nov 28 06:35:10 crc kubenswrapper[4955]: I1128 06:35:10.041330 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c69bc9df3da98dd5ad414a5e79f498fcf73cb64bc41d62375ef32cbf61b8210" Nov 28 06:35:10 crc kubenswrapper[4955]: I1128 06:35:10.041405 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl" Nov 28 06:35:15 crc kubenswrapper[4955]: I1128 06:35:15.207797 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-m4zws" podUID="fe5b3b04-5092-4f4f-b2e4-9b4ede37f887" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.859650 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq"] Nov 28 06:35:16 crc kubenswrapper[4955]: E1128 06:35:16.860133 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af247d46-e077-45be-af71-143bfc2cd71c" containerName="util" Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.860145 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="af247d46-e077-45be-af71-143bfc2cd71c" containerName="util" Nov 28 06:35:16 crc kubenswrapper[4955]: E1128 06:35:16.860158 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af247d46-e077-45be-af71-143bfc2cd71c" containerName="extract" Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.860163 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="af247d46-e077-45be-af71-143bfc2cd71c" containerName="extract" Nov 28 06:35:16 crc kubenswrapper[4955]: E1128 06:35:16.860176 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af247d46-e077-45be-af71-143bfc2cd71c" containerName="pull" Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.860181 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="af247d46-e077-45be-af71-143bfc2cd71c" containerName="pull" Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.860272 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="af247d46-e077-45be-af71-143bfc2cd71c" 
containerName="extract" Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.860716 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.863325 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-q2ddv" Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.888064 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq"] Nov 28 06:35:16 crc kubenswrapper[4955]: I1128 06:35:16.988924 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzkd\" (UniqueName: \"kubernetes.io/projected/a76c5381-15dd-479f-af8a-78a8c2ec2bad-kube-api-access-pvzkd\") pod \"openstack-operator-controller-operator-7d8f67c45-t6djq\" (UID: \"a76c5381-15dd-479f-af8a-78a8c2ec2bad\") " pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" Nov 28 06:35:17 crc kubenswrapper[4955]: I1128 06:35:17.090419 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvzkd\" (UniqueName: \"kubernetes.io/projected/a76c5381-15dd-479f-af8a-78a8c2ec2bad-kube-api-access-pvzkd\") pod \"openstack-operator-controller-operator-7d8f67c45-t6djq\" (UID: \"a76c5381-15dd-479f-af8a-78a8c2ec2bad\") " pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" Nov 28 06:35:17 crc kubenswrapper[4955]: I1128 06:35:17.132184 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvzkd\" (UniqueName: \"kubernetes.io/projected/a76c5381-15dd-479f-af8a-78a8c2ec2bad-kube-api-access-pvzkd\") pod \"openstack-operator-controller-operator-7d8f67c45-t6djq\" (UID: \"a76c5381-15dd-479f-af8a-78a8c2ec2bad\") " 
pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" Nov 28 06:35:17 crc kubenswrapper[4955]: I1128 06:35:17.184873 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" Nov 28 06:35:17 crc kubenswrapper[4955]: I1128 06:35:17.467667 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq"] Nov 28 06:35:18 crc kubenswrapper[4955]: I1128 06:35:18.224476 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" event={"ID":"a76c5381-15dd-479f-af8a-78a8c2ec2bad","Type":"ContainerStarted","Data":"56ecb023cfe5c1d6bda93cfc536792b5dde94ef9b8cfb220655b1d69eff3eba1"} Nov 28 06:35:22 crc kubenswrapper[4955]: I1128 06:35:22.256569 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" event={"ID":"a76c5381-15dd-479f-af8a-78a8c2ec2bad","Type":"ContainerStarted","Data":"934b90fc26477010bb7d4f1f20b214c1a79ee3f68a4b544f8b22adb3555caa4d"} Nov 28 06:35:22 crc kubenswrapper[4955]: I1128 06:35:22.257176 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" Nov 28 06:35:22 crc kubenswrapper[4955]: I1128 06:35:22.283240 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" podStartSLOduration=2.456436693 podStartE2EDuration="6.283214517s" podCreationTimestamp="2025-11-28 06:35:16 +0000 UTC" firstStartedPulling="2025-11-28 06:35:17.479598597 +0000 UTC m=+840.068854167" lastFinishedPulling="2025-11-28 06:35:21.306376421 +0000 UTC m=+843.895631991" observedRunningTime="2025-11-28 06:35:22.282771934 +0000 UTC m=+844.872027514" 
watchObservedRunningTime="2025-11-28 06:35:22.283214517 +0000 UTC m=+844.872470117" Nov 28 06:35:27 crc kubenswrapper[4955]: I1128 06:35:27.188162 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7d8f67c45-t6djq" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.777888 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.779594 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.781177 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kmcfh" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.786473 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.787420 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.789399 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-z9vcc" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.798560 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.831928 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-8cmf7"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.833511 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.835865 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fl5gr" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.840552 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.846075 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-8cmf7"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.858649 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.859867 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.861636 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-llb7b" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.871259 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.872305 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.882529 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.884310 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-69m4j" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.892526 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92flc\" (UniqueName: \"kubernetes.io/projected/813a8c4e-06bd-467e-9b80-0e3e88fb361a-kube-api-access-92flc\") pod \"cinder-operator-controller-manager-6b7f75547b-p5n27\" (UID: \"813a8c4e-06bd-467e-9b80-0e3e88fb361a\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.892622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nwgb\" (UniqueName: \"kubernetes.io/projected/3e51ea77-cbc1-4ebd-9247-335d93211353-kube-api-access-9nwgb\") pod \"barbican-operator-controller-manager-7b64f4fb85-5kt75\" (UID: \"3e51ea77-cbc1-4ebd-9247-335d93211353\") " 
pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.915670 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.917369 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.918496 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.920146 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-8vx7s" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.937611 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.938827 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.939856 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.943521 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.943813 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5gth6" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.945817 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.970553 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.971543 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.973089 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-jprcz" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.982033 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.984237 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.987260 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pnmmz" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.993080 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9"] Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.993677 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznj2\" (UniqueName: \"kubernetes.io/projected/d2c0d9ce-4c16-451d-948b-75ae7bbca487-kube-api-access-sznj2\") pod \"heat-operator-controller-manager-5b77f656f-lgnvq\" (UID: \"d2c0d9ce-4c16-451d-948b-75ae7bbca487\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.993763 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psdm5\" (UniqueName: \"kubernetes.io/projected/8bcb6097-d2d8-4190-afbd-644daa5ce7b6-kube-api-access-psdm5\") pod \"glance-operator-controller-manager-589cbd6b5b-jtxkh\" (UID: \"8bcb6097-d2d8-4190-afbd-644daa5ce7b6\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.993817 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nwgb\" (UniqueName: \"kubernetes.io/projected/3e51ea77-cbc1-4ebd-9247-335d93211353-kube-api-access-9nwgb\") pod \"barbican-operator-controller-manager-7b64f4fb85-5kt75\" (UID: \"3e51ea77-cbc1-4ebd-9247-335d93211353\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.993867 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92flc\" (UniqueName: \"kubernetes.io/projected/813a8c4e-06bd-467e-9b80-0e3e88fb361a-kube-api-access-92flc\") pod \"cinder-operator-controller-manager-6b7f75547b-p5n27\" (UID: \"813a8c4e-06bd-467e-9b80-0e3e88fb361a\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" Nov 28 06:35:45 crc kubenswrapper[4955]: I1128 06:35:45.993896 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnzx\" (UniqueName: \"kubernetes.io/projected/ef549437-6bef-428a-991f-b38cc613ec1e-kube-api-access-wdnzx\") pod \"designate-operator-controller-manager-955677c94-8cmf7\" (UID: \"ef549437-6bef-428a-991f-b38cc613ec1e\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.055436 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92flc\" (UniqueName: \"kubernetes.io/projected/813a8c4e-06bd-467e-9b80-0e3e88fb361a-kube-api-access-92flc\") pod \"cinder-operator-controller-manager-6b7f75547b-p5n27\" (UID: \"813a8c4e-06bd-467e-9b80-0e3e88fb361a\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.056154 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52"] Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.064450 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nwgb\" (UniqueName: \"kubernetes.io/projected/3e51ea77-cbc1-4ebd-9247-335d93211353-kube-api-access-9nwgb\") pod \"barbican-operator-controller-manager-7b64f4fb85-5kt75\" (UID: \"3e51ea77-cbc1-4ebd-9247-335d93211353\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" Nov 28 06:35:46 crc 
kubenswrapper[4955]: I1128 06:35:46.080280 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.094558 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdnzx\" (UniqueName: \"kubernetes.io/projected/ef549437-6bef-428a-991f-b38cc613ec1e-kube-api-access-wdnzx\") pod \"designate-operator-controller-manager-955677c94-8cmf7\" (UID: \"ef549437-6bef-428a-991f-b38cc613ec1e\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.094599 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sznj2\" (UniqueName: \"kubernetes.io/projected/d2c0d9ce-4c16-451d-948b-75ae7bbca487-kube-api-access-sznj2\") pod \"heat-operator-controller-manager-5b77f656f-lgnvq\" (UID: \"d2c0d9ce-4c16-451d-948b-75ae7bbca487\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.094632 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.094664 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46qv\" (UniqueName: \"kubernetes.io/projected/d8ca8a28-b011-4a61-b37d-5f84543d63bb-kube-api-access-z46qv\") pod \"ironic-operator-controller-manager-67cb4dc6d4-mdrg9\" (UID: \"d8ca8a28-b011-4a61-b37d-5f84543d63bb\") " 
pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.094695 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psdm5\" (UniqueName: \"kubernetes.io/projected/8bcb6097-d2d8-4190-afbd-644daa5ce7b6-kube-api-access-psdm5\") pod \"glance-operator-controller-manager-589cbd6b5b-jtxkh\" (UID: \"8bcb6097-d2d8-4190-afbd-644daa5ce7b6\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.094714 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrzkw\" (UniqueName: \"kubernetes.io/projected/3f5477af-57e8-4a83-95ce-9fea4d62e797-kube-api-access-lrzkw\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.094758 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lpq\" (UniqueName: \"kubernetes.io/projected/245721bd-2bc5-4f42-ac45-5ae0b07cd77e-kube-api-access-77lpq\") pod \"keystone-operator-controller-manager-7b4567c7cf-4pmmp\" (UID: \"245721bd-2bc5-4f42-ac45-5ae0b07cd77e\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.094773 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpv69\" (UniqueName: \"kubernetes.io/projected/84c6c0d5-d427-471a-8a54-9d3fc28264bc-kube-api-access-tpv69\") pod \"horizon-operator-controller-manager-5d494799bf-c7pkv\" (UID: \"84c6c0d5-d427-471a-8a54-9d3fc28264bc\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" Nov 28 06:35:46 crc 
kubenswrapper[4955]: I1128 06:35:46.094989 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cmn74" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.100851 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.102231 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp"] Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.111767 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.125563 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89"] Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.126657 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.131195 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7jqnf"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.145964 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psdm5\" (UniqueName: \"kubernetes.io/projected/8bcb6097-d2d8-4190-afbd-644daa5ce7b6-kube-api-access-psdm5\") pod \"glance-operator-controller-manager-589cbd6b5b-jtxkh\" (UID: \"8bcb6097-d2d8-4190-afbd-644daa5ce7b6\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.153287 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdnzx\" (UniqueName: \"kubernetes.io/projected/ef549437-6bef-428a-991f-b38cc613ec1e-kube-api-access-wdnzx\") pod \"designate-operator-controller-manager-955677c94-8cmf7\" (UID: \"ef549437-6bef-428a-991f-b38cc613ec1e\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.158210 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sznj2\" (UniqueName: \"kubernetes.io/projected/d2c0d9ce-4c16-451d-948b-75ae7bbca487-kube-api-access-sznj2\") pod \"heat-operator-controller-manager-5b77f656f-lgnvq\" (UID: \"d2c0d9ce-4c16-451d-948b-75ae7bbca487\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.167579 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.180377 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.183083 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.192527 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.196954 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrzkw\" (UniqueName: \"kubernetes.io/projected/3f5477af-57e8-4a83-95ce-9fea4d62e797-kube-api-access-lrzkw\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.197027 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bcv6\" (UniqueName: \"kubernetes.io/projected/d2d018b1-e591-4109-9b83-82bc60b2cb59-kube-api-access-6bcv6\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-kdz89\" (UID: \"d2d018b1-e591-4109-9b83-82bc60b2cb59\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.197049 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lpq\" (UniqueName: \"kubernetes.io/projected/245721bd-2bc5-4f42-ac45-5ae0b07cd77e-kube-api-access-77lpq\") pod \"keystone-operator-controller-manager-7b4567c7cf-4pmmp\" (UID: \"245721bd-2bc5-4f42-ac45-5ae0b07cd77e\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.197066 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpv69\" (UniqueName: \"kubernetes.io/projected/84c6c0d5-d427-471a-8a54-9d3fc28264bc-kube-api-access-tpv69\") pod \"horizon-operator-controller-manager-5d494799bf-c7pkv\" (UID: \"84c6c0d5-d427-471a-8a54-9d3fc28264bc\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.197089 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjk8p\" (UniqueName: \"kubernetes.io/projected/042d3c47-fa72-4e2f-a127-2885c81ec7e4-kube-api-access-mjk8p\") pod \"manila-operator-controller-manager-5d499bf58b-mwk52\" (UID: \"042d3c47-fa72-4e2f-a127-2885c81ec7e4\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.197131 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.197741 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.197158 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z46qv\" (UniqueName: \"kubernetes.io/projected/d8ca8a28-b011-4a61-b37d-5f84543d63bb-kube-api-access-z46qv\") pod \"ironic-operator-controller-manager-67cb4dc6d4-mdrg9\" (UID: \"d8ca8a28-b011-4a61-b37d-5f84543d63bb\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9"
Nov 28 06:35:46 crc kubenswrapper[4955]: E1128 06:35:46.198418 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 06:35:46 crc kubenswrapper[4955]: E1128 06:35:46.198465 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert podName:3f5477af-57e8-4a83-95ce-9fea4d62e797 nodeName:}" failed. No retries permitted until 2025-11-28 06:35:46.698450402 +0000 UTC m=+869.287705962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert") pod "infra-operator-controller-manager-57548d458d-rtv6s" (UID: "3f5477af-57e8-4a83-95ce-9fea4d62e797") : secret "infra-operator-webhook-server-cert" not found
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.212559 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.213632 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.229524 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpv69\" (UniqueName: \"kubernetes.io/projected/84c6c0d5-d427-471a-8a54-9d3fc28264bc-kube-api-access-tpv69\") pod \"horizon-operator-controller-manager-5d494799bf-c7pkv\" (UID: \"84c6c0d5-d427-471a-8a54-9d3fc28264bc\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.234771 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.242665 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.251073 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-76n5m"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.252212 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.255331 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.257940 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8t6tl"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.271205 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z46qv\" (UniqueName: \"kubernetes.io/projected/d8ca8a28-b011-4a61-b37d-5f84543d63bb-kube-api-access-z46qv\") pod \"ironic-operator-controller-manager-67cb4dc6d4-mdrg9\" (UID: \"d8ca8a28-b011-4a61-b37d-5f84543d63bb\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.275133 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lpq\" (UniqueName: \"kubernetes.io/projected/245721bd-2bc5-4f42-ac45-5ae0b07cd77e-kube-api-access-77lpq\") pod \"keystone-operator-controller-manager-7b4567c7cf-4pmmp\" (UID: \"245721bd-2bc5-4f42-ac45-5ae0b07cd77e\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.276271 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrzkw\" (UniqueName: \"kubernetes.io/projected/3f5477af-57e8-4a83-95ce-9fea4d62e797-kube-api-access-lrzkw\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.284370 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.303230 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w77vc\" (UniqueName: \"kubernetes.io/projected/4871d492-a015-4a2b-9f6a-62e15bfdb825-kube-api-access-w77vc\") pod \"neutron-operator-controller-manager-6fdcddb789-fz6rt\" (UID: \"4871d492-a015-4a2b-9f6a-62e15bfdb825\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.303275 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bcv6\" (UniqueName: \"kubernetes.io/projected/d2d018b1-e591-4109-9b83-82bc60b2cb59-kube-api-access-6bcv6\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-kdz89\" (UID: \"d2d018b1-e591-4109-9b83-82bc60b2cb59\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.303297 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzrk8\" (UniqueName: \"kubernetes.io/projected/a9444b3d-85c5-4f44-953d-65a4dd2f30f2-kube-api-access-mzrk8\") pod \"octavia-operator-controller-manager-64cdc6ff96-f77r5\" (UID: \"a9444b3d-85c5-4f44-953d-65a4dd2f30f2\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.303322 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjk8p\" (UniqueName: \"kubernetes.io/projected/042d3c47-fa72-4e2f-a127-2885c81ec7e4-kube-api-access-mjk8p\") pod \"manila-operator-controller-manager-5d499bf58b-mwk52\" (UID: \"042d3c47-fa72-4e2f-a127-2885c81ec7e4\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.309571 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.310676 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.313761 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tk6nr"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.333016 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.333940 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.336778 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bcv6\" (UniqueName: \"kubernetes.io/projected/d2d018b1-e591-4109-9b83-82bc60b2cb59-kube-api-access-6bcv6\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-kdz89\" (UID: \"d2d018b1-e591-4109-9b83-82bc60b2cb59\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.351802 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.355066 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjk8p\" (UniqueName: \"kubernetes.io/projected/042d3c47-fa72-4e2f-a127-2885c81ec7e4-kube-api-access-mjk8p\") pod \"manila-operator-controller-manager-5d499bf58b-mwk52\" (UID: \"042d3c47-fa72-4e2f-a127-2885c81ec7e4\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.355119 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.356146 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.360335 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-dnzj6"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.372147 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.381139 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.403098 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-bcbhj"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.403284 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.406180 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w77vc\" (UniqueName: \"kubernetes.io/projected/4871d492-a015-4a2b-9f6a-62e15bfdb825-kube-api-access-w77vc\") pod \"neutron-operator-controller-manager-6fdcddb789-fz6rt\" (UID: \"4871d492-a015-4a2b-9f6a-62e15bfdb825\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.406234 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzrk8\" (UniqueName: \"kubernetes.io/projected/a9444b3d-85c5-4f44-953d-65a4dd2f30f2-kube-api-access-mzrk8\") pod \"octavia-operator-controller-manager-64cdc6ff96-f77r5\" (UID: \"a9444b3d-85c5-4f44-953d-65a4dd2f30f2\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.406267 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8txz\" (UniqueName: \"kubernetes.io/projected/801bb8d6-c107-48ad-b985-62e932b38992-kube-api-access-f8txz\") pod \"nova-operator-controller-manager-79556f57fc-hwmcq\" (UID: \"801bb8d6-c107-48ad-b985-62e932b38992\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.406305 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hhp\" (UniqueName: \"kubernetes.io/projected/f0d92863-0f89-415d-b4a3-24e09fb4ec02-kube-api-access-h4hhp\") pod \"ovn-operator-controller-manager-56897c768d-qtcjk\" (UID: \"f0d92863-0f89-415d-b4a3-24e09fb4ec02\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.434899 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.447058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzrk8\" (UniqueName: \"kubernetes.io/projected/a9444b3d-85c5-4f44-953d-65a4dd2f30f2-kube-api-access-mzrk8\") pod \"octavia-operator-controller-manager-64cdc6ff96-f77r5\" (UID: \"a9444b3d-85c5-4f44-953d-65a4dd2f30f2\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.475455 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w77vc\" (UniqueName: \"kubernetes.io/projected/4871d492-a015-4a2b-9f6a-62e15bfdb825-kube-api-access-w77vc\") pod \"neutron-operator-controller-manager-6fdcddb789-fz6rt\" (UID: \"4871d492-a015-4a2b-9f6a-62e15bfdb825\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.476357 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.477339 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.479253 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-brn6l"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.495554 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.499099 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.507396 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.507408 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpbzh\" (UniqueName: \"kubernetes.io/projected/a1c5873f-0d08-4f51-aa91-822fc86a33e3-kube-api-access-tpbzh\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.507482 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5bd7\" (UniqueName: \"kubernetes.io/projected/5d9f654a-a223-4b91-93fd-301807c6f29a-kube-api-access-x5bd7\") pod \"placement-operator-controller-manager-57988cc5b5-4csc4\" (UID: \"5d9f654a-a223-4b91-93fd-301807c6f29a\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.507526 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8txz\" (UniqueName: \"kubernetes.io/projected/801bb8d6-c107-48ad-b985-62e932b38992-kube-api-access-f8txz\") pod \"nova-operator-controller-manager-79556f57fc-hwmcq\" (UID: \"801bb8d6-c107-48ad-b985-62e932b38992\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.507552 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.507571 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4hhp\" (UniqueName: \"kubernetes.io/projected/f0d92863-0f89-415d-b4a3-24e09fb4ec02-kube-api-access-h4hhp\") pod \"ovn-operator-controller-manager-56897c768d-qtcjk\" (UID: \"f0d92863-0f89-415d-b4a3-24e09fb4ec02\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.516558 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.517597 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.528472 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9qv52"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.543136 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4hhp\" (UniqueName: \"kubernetes.io/projected/f0d92863-0f89-415d-b4a3-24e09fb4ec02-kube-api-access-h4hhp\") pod \"ovn-operator-controller-manager-56897c768d-qtcjk\" (UID: \"f0d92863-0f89-415d-b4a3-24e09fb4ec02\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.546141 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8txz\" (UniqueName: \"kubernetes.io/projected/801bb8d6-c107-48ad-b985-62e932b38992-kube-api-access-f8txz\") pod \"nova-operator-controller-manager-79556f57fc-hwmcq\" (UID: \"801bb8d6-c107-48ad-b985-62e932b38992\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.554049 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.565181 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.565764 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.583129 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.584118 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.587242 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dzwd2"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.603849 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.611477 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5bd7\" (UniqueName: \"kubernetes.io/projected/5d9f654a-a223-4b91-93fd-301807c6f29a-kube-api-access-x5bd7\") pod \"placement-operator-controller-manager-57988cc5b5-4csc4\" (UID: \"5d9f654a-a223-4b91-93fd-301807c6f29a\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.611535 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76bvh\" (UniqueName: \"kubernetes.io/projected/d11a80a8-9bba-491e-aa38-e93e59c3343e-kube-api-access-76bvh\") pod \"telemetry-operator-controller-manager-76cc84c6bb-rvkg2\" (UID: \"d11a80a8-9bba-491e-aa38-e93e59c3343e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.611572 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.611588 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm9wb\" (UniqueName: \"kubernetes.io/projected/cfa54a97-6210-4566-bf61-c0c7720ec0ec-kube-api-access-zm9wb\") pod \"swift-operator-controller-manager-d77b94747-q4wgd\" (UID: \"cfa54a97-6210-4566-bf61-c0c7720ec0ec\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.611637 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpbzh\" (UniqueName: \"kubernetes.io/projected/a1c5873f-0d08-4f51-aa91-822fc86a33e3-kube-api-access-tpbzh\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"
Nov 28 06:35:46 crc kubenswrapper[4955]: E1128 06:35:46.612042 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 06:35:46 crc kubenswrapper[4955]: E1128 06:35:46.612098 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert podName:a1c5873f-0d08-4f51-aa91-822fc86a33e3 nodeName:}" failed. No retries permitted until 2025-11-28 06:35:47.112083653 +0000 UTC m=+869.701339223 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" (UID: "a1c5873f-0d08-4f51-aa91-822fc86a33e3") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.630877 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.642492 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpbzh\" (UniqueName: \"kubernetes.io/projected/a1c5873f-0d08-4f51-aa91-822fc86a33e3-kube-api-access-tpbzh\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.642815 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.648550 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5bd7\" (UniqueName: \"kubernetes.io/projected/5d9f654a-a223-4b91-93fd-301807c6f29a-kube-api-access-x5bd7\") pod \"placement-operator-controller-manager-57988cc5b5-4csc4\" (UID: \"5d9f654a-a223-4b91-93fd-301807c6f29a\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.666841 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.667958 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.674848 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-ls7rf"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.706895 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.739603 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.739749 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm9wb\" (UniqueName: \"kubernetes.io/projected/cfa54a97-6210-4566-bf61-c0c7720ec0ec-kube-api-access-zm9wb\") pod \"swift-operator-controller-manager-d77b94747-q4wgd\" (UID: \"cfa54a97-6210-4566-bf61-c0c7720ec0ec\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.739800 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.739866 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwzr\" (UniqueName: \"kubernetes.io/projected/e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac-kube-api-access-jlwzr\") pod \"test-operator-controller-manager-5cd6c7f4c8-4vhhh\" (UID: \"e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.739905 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76bvh\" (UniqueName: \"kubernetes.io/projected/d11a80a8-9bba-491e-aa38-e93e59c3343e-kube-api-access-76bvh\") pod \"telemetry-operator-controller-manager-76cc84c6bb-rvkg2\" (UID: \"d11a80a8-9bba-491e-aa38-e93e59c3343e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2"
Nov 28 06:35:46 crc kubenswrapper[4955]: E1128 06:35:46.740220 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 06:35:46 crc kubenswrapper[4955]: E1128 06:35:46.740282 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert podName:3f5477af-57e8-4a83-95ce-9fea4d62e797 nodeName:}" failed. No retries permitted until 2025-11-28 06:35:47.740254733 +0000 UTC m=+870.329510303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert") pod "infra-operator-controller-manager-57548d458d-rtv6s" (UID: "3f5477af-57e8-4a83-95ce-9fea4d62e797") : secret "infra-operator-webhook-server-cert" not found
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.758617 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.761720 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.765546 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-jg5j8"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.766278 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm9wb\" (UniqueName: \"kubernetes.io/projected/cfa54a97-6210-4566-bf61-c0c7720ec0ec-kube-api-access-zm9wb\") pod \"swift-operator-controller-manager-d77b94747-q4wgd\" (UID: \"cfa54a97-6210-4566-bf61-c0c7720ec0ec\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.766318 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76bvh\" (UniqueName: \"kubernetes.io/projected/d11a80a8-9bba-491e-aa38-e93e59c3343e-kube-api-access-76bvh\") pod \"telemetry-operator-controller-manager-76cc84c6bb-rvkg2\" (UID: \"d11a80a8-9bba-491e-aa38-e93e59c3343e\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.789003 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.820220 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.842068 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77x6\" (UniqueName: \"kubernetes.io/projected/0811185a-c49e-4a81-b6d7-c786f590177b-kube-api-access-n77x6\") pod \"watcher-operator-controller-manager-656dcb59d4-mmqbs\" (UID: \"0811185a-c49e-4a81-b6d7-c786f590177b\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.842125 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwzr\" (UniqueName: \"kubernetes.io/projected/e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac-kube-api-access-jlwzr\") pod \"test-operator-controller-manager-5cd6c7f4c8-4vhhh\" (UID: \"e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.845192 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.849755 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.851579 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.852130 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.853968 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-brjtz"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.854675 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.854698 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.868145 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.869075 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.877882 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.878214 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-2qh5m"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.887376 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwzr\" (UniqueName: \"kubernetes.io/projected/e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac-kube-api-access-jlwzr\") pod \"test-operator-controller-manager-5cd6c7f4c8-4vhhh\" (UID: \"e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.901824 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb"]
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.916234 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.943770 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.943847 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sstgt\" (UniqueName: \"kubernetes.io/projected/5daef806-96c3-439c-85f9-f1ef27a8be0d-kube-api-access-sstgt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-n7jmb\" (UID: \"5daef806-96c3-439c-85f9-f1ef27a8be0d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.943888 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n77x6\" (UniqueName: \"kubernetes.io/projected/0811185a-c49e-4a81-b6d7-c786f590177b-kube-api-access-n77x6\") pod \"watcher-operator-controller-manager-656dcb59d4-mmqbs\" (UID: \"0811185a-c49e-4a81-b6d7-c786f590177b\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs"
Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.943906 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName:
\"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.943958 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrpf\" (UniqueName: \"kubernetes.io/projected/84a07034-e21d-4e5b-a6ef-ba76d30b662a-kube-api-access-qnrpf\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:46 crc kubenswrapper[4955]: I1128 06:35:46.960134 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n77x6\" (UniqueName: \"kubernetes.io/projected/0811185a-c49e-4a81-b6d7-c786f590177b-kube-api-access-n77x6\") pod \"watcher-operator-controller-manager-656dcb59d4-mmqbs\" (UID: \"0811185a-c49e-4a81-b6d7-c786f590177b\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.009157 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.045517 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.046118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sstgt\" (UniqueName: \"kubernetes.io/projected/5daef806-96c3-439c-85f9-f1ef27a8be0d-kube-api-access-sstgt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-n7jmb\" (UID: \"5daef806-96c3-439c-85f9-f1ef27a8be0d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.046217 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.046304 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnrpf\" (UniqueName: \"kubernetes.io/projected/84a07034-e21d-4e5b-a6ef-ba76d30b662a-kube-api-access-qnrpf\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.046052 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.046746 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:35:47.546731141 +0000 UTC m=+870.135986711 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "webhook-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.047094 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.047192 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:35:47.547181974 +0000 UTC m=+870.136437544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "metrics-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.066432 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnrpf\" (UniqueName: \"kubernetes.io/projected/84a07034-e21d-4e5b-a6ef-ba76d30b662a-kube-api-access-qnrpf\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.068736 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sstgt\" (UniqueName: \"kubernetes.io/projected/5daef806-96c3-439c-85f9-f1ef27a8be0d-kube-api-access-sstgt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-n7jmb\" (UID: \"5daef806-96c3-439c-85f9-f1ef27a8be0d\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.098008 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.120255 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.156624 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.156868 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.156923 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert podName:a1c5873f-0d08-4f51-aa91-822fc86a33e3 nodeName:}" failed. No retries permitted until 2025-11-28 06:35:48.156904982 +0000 UTC m=+870.746160552 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" (UID: "a1c5873f-0d08-4f51-aa91-822fc86a33e3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.213440 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.264707 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb" Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.383091 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-8cmf7"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.396201 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv"] Nov 28 06:35:47 crc kubenswrapper[4955]: W1128 06:35:47.402777 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84c6c0d5_d427_471a_8a54_9d3fc28264bc.slice/crio-523f40e596ab2a67ae06369e9edc5b6871f90718101a290c690efc0d78773b6d WatchSource:0}: Error finding container 523f40e596ab2a67ae06369e9edc5b6871f90718101a290c690efc0d78773b6d: Status 404 returned error can't find the container with id 523f40e596ab2a67ae06369e9edc5b6871f90718101a290c690efc0d78773b6d Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.432964 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" 
event={"ID":"8bcb6097-d2d8-4190-afbd-644daa5ce7b6","Type":"ContainerStarted","Data":"a6872034560055b866b63925e7a3792090ceede0706d3686bc016bb44519a4e3"} Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.437679 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" event={"ID":"84c6c0d5-d427-471a-8a54-9d3fc28264bc","Type":"ContainerStarted","Data":"523f40e596ab2a67ae06369e9edc5b6871f90718101a290c690efc0d78773b6d"} Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.442992 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" event={"ID":"ef549437-6bef-428a-991f-b38cc613ec1e","Type":"ContainerStarted","Data":"4ef3e8fa1097fb93defb3d52e2cb30f7ff0b43c353ee8cdd6efd3650ee9d1342"} Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.445889 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" event={"ID":"813a8c4e-06bd-467e-9b80-0e3e88fb361a","Type":"ContainerStarted","Data":"b278c2d385862e721acb78f4b2b1840ce0f6b0df393ac4afd73e956c343951b6"} Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.450585 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" event={"ID":"3e51ea77-cbc1-4ebd-9247-335d93211353","Type":"ContainerStarted","Data":"092196deb6b1422a576ebb2755ef8e4a77984a007a55189925715c2d2592287a"} Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.562206 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 
06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.562312 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.562416 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.562477 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.562540 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:35:48.56248524 +0000 UTC m=+871.151740800 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "metrics-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.562561 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:35:48.562551502 +0000 UTC m=+871.151807202 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "webhook-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.662396 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq"] Nov 28 06:35:47 crc kubenswrapper[4955]: W1128 06:35:47.673007 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2c0d9ce_4c16_451d_948b_75ae7bbca487.slice/crio-551945879d9895dcc08c8e7d0a7b5f5a52f316c533a3ab8beef090c787a5a51b WatchSource:0}: Error finding container 551945879d9895dcc08c8e7d0a7b5f5a52f316c533a3ab8beef090c787a5a51b: Status 404 returned error can't find the container with id 551945879d9895dcc08c8e7d0a7b5f5a52f316c533a3ab8beef090c787a5a51b Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.681083 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.698887 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.751949 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.752250 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.752261 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.752270 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52"] Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.764817 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.764931 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.764970 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert podName:3f5477af-57e8-4a83-95ce-9fea4d62e797 nodeName:}" failed. No retries permitted until 2025-11-28 06:35:49.764956835 +0000 UTC m=+872.354212405 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert") pod "infra-operator-controller-manager-57548d458d-rtv6s" (UID: "3f5477af-57e8-4a83-95ce-9fea4d62e797") : secret "infra-operator-webhook-server-cert" not found Nov 28 06:35:47 crc kubenswrapper[4955]: W1128 06:35:47.771071 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0d92863_0f89_415d_b4a3_24e09fb4ec02.slice/crio-e85aab1acb84229431f8b515635e95e540bd2d103567f30f4ceaa9622b35e4f0 WatchSource:0}: Error finding container e85aab1acb84229431f8b515635e95e540bd2d103567f30f4ceaa9622b35e4f0: Status 404 returned error can't find the container with id e85aab1acb84229431f8b515635e95e540bd2d103567f30f4ceaa9622b35e4f0 Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.771319 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm9wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-q4wgd_openstack-operators(cfa54a97-6210-4566-bf61-c0c7720ec0ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.775428 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm9wb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-q4wgd_openstack-operators(cfa54a97-6210-4566-bf61-c0c7720ec0ec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.775557 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w77vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6fdcddb789-fz6rt_openstack-operators(4871d492-a015-4a2b-9f6a-62e15bfdb825): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.775665 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jlwzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-4vhhh_openstack-operators(e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.775726 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5bd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57988cc5b5-4csc4_openstack-operators(5d9f654a-a223-4b91-93fd-301807c6f29a): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.776607 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd" podUID="cfa54a97-6210-4566-bf61-c0c7720ec0ec"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.780766 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5bd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57988cc5b5-4csc4_openstack-operators(5d9f654a-a223-4b91-93fd-301807c6f29a): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.780985 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4hhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-qtcjk_openstack-operators(f0d92863-0f89-415d-b4a3-24e09fb4ec02): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.781866 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" podUID="5d9f654a-a223-4b91-93fd-301807c6f29a"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.782046 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jlwzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-4vhhh_openstack-operators(e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.782435 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w77vc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-6fdcddb789-fz6rt_openstack-operators(4871d492-a015-4a2b-9f6a-62e15bfdb825): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.783056 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9"]
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.783592 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt" podUID="4871d492-a015-4a2b-9f6a-62e15bfdb825"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.783528 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4hhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-qtcjk_openstack-operators(f0d92863-0f89-415d-b4a3-24e09fb4ec02): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.783840 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" podUID="e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.785767 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" podUID="f0d92863-0f89-415d-b4a3-24e09fb4ec02"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.816177 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n77x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-mmqbs_openstack-operators(0811185a-c49e-4a81-b6d7-c786f590177b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.818187 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n77x6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-mmqbs_openstack-operators(0811185a-c49e-4a81-b6d7-c786f590177b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 28 06:35:47 crc kubenswrapper[4955]: E1128 06:35:47.820533 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" podUID="0811185a-c49e-4a81-b6d7-c786f590177b"
Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.822156 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5"]
Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.830385 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp"]
Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.839632 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4"]
Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.844095 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2"]
Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.850646 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb"]
Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.858217 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh"]
Nov 28 06:35:47 crc kubenswrapper[4955]: I1128 06:35:47.861928 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs"]
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.169318 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.169657 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.169717 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert podName:a1c5873f-0d08-4f51-aa91-822fc86a33e3 nodeName:}" failed. No retries permitted until 2025-11-28 06:35:50.16969767 +0000 UTC m=+872.758953240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" (UID: "a1c5873f-0d08-4f51-aa91-822fc86a33e3") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.462580 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89" event={"ID":"d2d018b1-e591-4109-9b83-82bc60b2cb59","Type":"ContainerStarted","Data":"27b7e54bf960822c96019e4c94a33379708f83fab20a18a0ff7ca63d8c4538de"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.464641 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb" event={"ID":"5daef806-96c3-439c-85f9-f1ef27a8be0d","Type":"ContainerStarted","Data":"70ac86f39f1da77290ba2b218ce01138e8ad79e67c1a2bc8294d01a196b9fe5d"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.466345 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9" event={"ID":"d8ca8a28-b011-4a61-b37d-5f84543d63bb","Type":"ContainerStarted","Data":"5cf4b69c5821814e6f1f8916c2dc4ab1f8f10c87abbbe25e42455834e8d7d739"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.467403 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" event={"ID":"f0d92863-0f89-415d-b4a3-24e09fb4ec02","Type":"ContainerStarted","Data":"e85aab1acb84229431f8b515635e95e540bd2d103567f30f4ceaa9622b35e4f0"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.469735 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" event={"ID":"801bb8d6-c107-48ad-b985-62e932b38992","Type":"ContainerStarted","Data":"b01bc1dc389da00b0785840f0e16ff9bf658377d6f504fe510b5725651ab6e45"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.471576 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt" event={"ID":"4871d492-a015-4a2b-9f6a-62e15bfdb825","Type":"ContainerStarted","Data":"cbd01acdf04ef1550b0643ccfcff001238ea30b57ee65ea4565cda78b9b5d356"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.474559 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52" event={"ID":"042d3c47-fa72-4e2f-a127-2885c81ec7e4","Type":"ContainerStarted","Data":"76395bb510e87b40da64ba7961e8ac40b73750552de0d0fc471e2429d2706f14"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.475835 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" event={"ID":"0811185a-c49e-4a81-b6d7-c786f590177b","Type":"ContainerStarted","Data":"fa8bf90a32cb024fe080750c5ba5c786df91d0137ae616720b124c58d4a08313"}
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.479351 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" podUID="0811185a-c49e-4a81-b6d7-c786f590177b"
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.479488 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" podUID="f0d92863-0f89-415d-b4a3-24e09fb4ec02"
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.479810 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt" podUID="4871d492-a015-4a2b-9f6a-62e15bfdb825"
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.481800 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp" event={"ID":"245721bd-2bc5-4f42-ac45-5ae0b07cd77e","Type":"ContainerStarted","Data":"8ba8d0d4a8623168c4cbba068080de76f86f016e7ef471c466bcfee50ed30b2b"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.486313 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5" event={"ID":"a9444b3d-85c5-4f44-953d-65a4dd2f30f2","Type":"ContainerStarted","Data":"41b04f49daf7bf36529b6c8751a69ce05a2f98eb97f94cda07ad48fb85756d49"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.487610 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" event={"ID":"5d9f654a-a223-4b91-93fd-301807c6f29a","Type":"ContainerStarted","Data":"a3c2ab6e6605bd8c86324effea8da12fd610775cab9f9460b8553b0b15bf7c7d"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.489396 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd" event={"ID":"cfa54a97-6210-4566-bf61-c0c7720ec0ec","Type":"ContainerStarted","Data":"85664f244fbfba68bf5af2e79e9e7d0677f93daac43b14f20645629961ec7e3d"}
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.489557 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" podUID="5d9f654a-a223-4b91-93fd-301807c6f29a"
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.492054 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" event={"ID":"e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac","Type":"ContainerStarted","Data":"9efc3987b8d047ec33c1cd0c07969bcca9984853074fad71e9ef7530402769c9"}
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.492907 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd" podUID="cfa54a97-6210-4566-bf61-c0c7720ec0ec"
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.493565 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" podUID="e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac"
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.494529 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2" event={"ID":"d11a80a8-9bba-491e-aa38-e93e59c3343e","Type":"ContainerStarted","Data":"c5e08ba8201db761c50fbab197fb951aec5614b945789c3402aaf7a944d851cf"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.499720 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" event={"ID":"d2c0d9ce-4c16-451d-948b-75ae7bbca487","Type":"ContainerStarted","Data":"551945879d9895dcc08c8e7d0a7b5f5a52f316c533a3ab8beef090c787a5a51b"}
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.573639 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx"
Nov 28 06:35:48 crc kubenswrapper[4955]: I1128 06:35:48.573761 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx"
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.574448 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.574535 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:35:50.574516185 +0000 UTC m=+873.163771765 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "metrics-server-cert" not found
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.574603 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 06:35:48 crc kubenswrapper[4955]: E1128 06:35:48.574647 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:35:50.574633659 +0000 UTC m=+873.163889229 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "webhook-server-cert" not found
Nov 28 06:35:49 crc kubenswrapper[4955]: E1128 06:35:49.507435 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd" podUID="cfa54a97-6210-4566-bf61-c0c7720ec0ec"
Nov 28 06:35:49 crc kubenswrapper[4955]: E1128 06:35:49.507983 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" podUID="f0d92863-0f89-415d-b4a3-24e09fb4ec02"
Nov 28 06:35:49 crc kubenswrapper[4955]: E1128 06:35:49.508030 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" podUID="5d9f654a-a223-4b91-93fd-301807c6f29a"
Nov 28 06:35:49 crc kubenswrapper[4955]: E1128 06:35:49.508249 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" podUID="e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac"
Nov 28 06:35:49 crc kubenswrapper[4955]: E1128 06:35:49.508314 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" podUID="0811185a-c49e-4a81-b6d7-c786f590177b"
Nov 28 06:35:49 crc kubenswrapper[4955]: E1128 06:35:49.508525 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e00a9ed0ab26c5b745bd804ab1fe6b22428d026f17ea05a05f045e060342f46c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt" podUID="4871d492-a015-4a2b-9f6a-62e15bfdb825"
Nov 28 06:35:49 crc kubenswrapper[4955]: E1128 06:35:49.794026 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 06:35:49 crc kubenswrapper[4955]: E1128 06:35:49.794087 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert podName:3f5477af-57e8-4a83-95ce-9fea4d62e797 nodeName:}" failed. No retries permitted until 2025-11-28 06:35:53.794072212 +0000 UTC m=+876.383327782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert") pod "infra-operator-controller-manager-57548d458d-rtv6s" (UID: "3f5477af-57e8-4a83-95ce-9fea4d62e797") : secret "infra-operator-webhook-server-cert" not found
Nov 28 06:35:49 crc kubenswrapper[4955]: I1128 06:35:49.796810 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"
Nov 28 06:35:50 crc kubenswrapper[4955]: I1128 06:35:50.202558 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"
Nov 28 06:35:50 crc kubenswrapper[4955]: E1128 06:35:50.202806 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 06:35:50 crc kubenswrapper[4955]: E1128 06:35:50.202912 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert podName:a1c5873f-0d08-4f51-aa91-822fc86a33e3 nodeName:}" failed. No retries permitted until 2025-11-28 06:35:54.202886064 +0000 UTC m=+876.792141674 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" (UID: "a1c5873f-0d08-4f51-aa91-822fc86a33e3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 06:35:50 crc kubenswrapper[4955]: I1128 06:35:50.608646 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:50 crc kubenswrapper[4955]: I1128 06:35:50.608740 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:50 crc kubenswrapper[4955]: E1128 06:35:50.608849 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 06:35:50 crc kubenswrapper[4955]: E1128 06:35:50.608936 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:35:54.608913536 +0000 UTC m=+877.198169106 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "webhook-server-cert" not found Nov 28 06:35:50 crc kubenswrapper[4955]: E1128 06:35:50.608857 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 06:35:50 crc kubenswrapper[4955]: E1128 06:35:50.609239 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:35:54.609231315 +0000 UTC m=+877.198486885 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "metrics-server-cert" not found Nov 28 06:35:53 crc kubenswrapper[4955]: I1128 06:35:53.392898 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:35:53 crc kubenswrapper[4955]: I1128 06:35:53.393370 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:35:53 crc kubenswrapper[4955]: I1128 06:35:53.796179 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" Nov 28 06:35:53 crc kubenswrapper[4955]: E1128 06:35:53.796376 4955 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 06:35:53 crc kubenswrapper[4955]: E1128 06:35:53.796490 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert podName:3f5477af-57e8-4a83-95ce-9fea4d62e797 nodeName:}" failed. No retries permitted until 2025-11-28 06:36:01.796468555 +0000 UTC m=+884.385724135 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert") pod "infra-operator-controller-manager-57548d458d-rtv6s" (UID: "3f5477af-57e8-4a83-95ce-9fea4d62e797") : secret "infra-operator-webhook-server-cert" not found Nov 28 06:35:54 crc kubenswrapper[4955]: I1128 06:35:54.205637 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" Nov 28 06:35:54 crc kubenswrapper[4955]: E1128 06:35:54.206156 4955 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 06:35:54 crc kubenswrapper[4955]: E1128 06:35:54.206273 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert podName:a1c5873f-0d08-4f51-aa91-822fc86a33e3 nodeName:}" failed. No retries permitted until 2025-11-28 06:36:02.206258055 +0000 UTC m=+884.795513625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" (UID: "a1c5873f-0d08-4f51-aa91-822fc86a33e3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 06:35:54 crc kubenswrapper[4955]: I1128 06:35:54.614219 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:54 crc kubenswrapper[4955]: I1128 06:35:54.614626 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:35:54 crc kubenswrapper[4955]: E1128 06:35:54.614732 4955 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 06:35:54 crc kubenswrapper[4955]: E1128 06:35:54.614774 4955 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 06:35:54 crc kubenswrapper[4955]: E1128 06:35:54.614785 4955 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:36:02.614769388 +0000 UTC m=+885.204024958 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "metrics-server-cert" not found Nov 28 06:35:54 crc kubenswrapper[4955]: E1128 06:35:54.614801 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs podName:84a07034-e21d-4e5b-a6ef-ba76d30b662a nodeName:}" failed. No retries permitted until 2025-11-28 06:36:02.614793059 +0000 UTC m=+885.204048629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs") pod "openstack-operator-controller-manager-bd7f7485b-zbpwx" (UID: "84a07034-e21d-4e5b-a6ef-ba76d30b662a") : secret "webhook-server-cert" not found Nov 28 06:35:56 crc kubenswrapper[4955]: I1128 06:35:56.752671 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtp9v"] Nov 28 06:35:56 crc kubenswrapper[4955]: I1128 06:35:56.756834 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:56 crc kubenswrapper[4955]: I1128 06:35:56.765638 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtp9v"] Nov 28 06:35:56 crc kubenswrapper[4955]: I1128 06:35:56.953068 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb982\" (UniqueName: \"kubernetes.io/projected/a061ef17-3f7b-4a7d-8684-af18c9319228-kube-api-access-tb982\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:56 crc kubenswrapper[4955]: I1128 06:35:56.953116 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-catalog-content\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:56 crc kubenswrapper[4955]: I1128 06:35:56.953737 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-utilities\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:57 crc kubenswrapper[4955]: I1128 06:35:57.055022 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-utilities\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:57 crc kubenswrapper[4955]: I1128 06:35:57.055138 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tb982\" (UniqueName: \"kubernetes.io/projected/a061ef17-3f7b-4a7d-8684-af18c9319228-kube-api-access-tb982\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:57 crc kubenswrapper[4955]: I1128 06:35:57.055156 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-catalog-content\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:57 crc kubenswrapper[4955]: I1128 06:35:57.055613 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-catalog-content\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:57 crc kubenswrapper[4955]: I1128 06:35:57.055923 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-utilities\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:57 crc kubenswrapper[4955]: I1128 06:35:57.080264 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb982\" (UniqueName: \"kubernetes.io/projected/a061ef17-3f7b-4a7d-8684-af18c9319228-kube-api-access-tb982\") pod \"community-operators-gtp9v\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") " pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:35:57 crc kubenswrapper[4955]: I1128 06:35:57.087725 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtp9v" Nov 28 06:36:00 crc kubenswrapper[4955]: E1128 06:36:00.766172 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 28 06:36:00 crc kubenswrapper[4955]: E1128 06:36:00.766826 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sstgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-n7jmb_openstack-operators(5daef806-96c3-439c-85f9-f1ef27a8be0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:36:00 crc kubenswrapper[4955]: E1128 06:36:00.768039 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb" podUID="5daef806-96c3-439c-85f9-f1ef27a8be0d" Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 06:36:01.380121 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711" Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 06:36:01.380285 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-77lpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b4567c7cf-4pmmp_openstack-operators(245721bd-2bc5-4f42-ac45-5ae0b07cd77e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:36:01 crc kubenswrapper[4955]: I1128 06:36:01.590691 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtp9v"] Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 06:36:01.604805 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb" podUID="5daef806-96c3-439c-85f9-f1ef27a8be0d" Nov 28 06:36:01 crc kubenswrapper[4955]: W1128 06:36:01.619813 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda061ef17_3f7b_4a7d_8684_af18c9319228.slice/crio-b9c97b306cbb54d7bc5ddc5367ed7d2210b76dbc5a13e9930fc3da32734f784c 
WatchSource:0}: Error finding container b9c97b306cbb54d7bc5ddc5367ed7d2210b76dbc5a13e9930fc3da32734f784c: Status 404 returned error can't find the container with id b9c97b306cbb54d7bc5ddc5367ed7d2210b76dbc5a13e9930fc3da32734f784c Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 06:36:01.787040 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-psdm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-589cbd6b5b-jtxkh_openstack-operators(8bcb6097-d2d8-4190-afbd-644daa5ce7b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 
06:36:01.788492 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" podUID="8bcb6097-d2d8-4190-afbd-644daa5ce7b6" Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 06:36:01.801925 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tpv69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5d494799bf-c7pkv_openstack-operators(84c6c0d5-d427-471a-8a54-9d3fc28264bc): ErrImagePull: pull QPS exceeded" 
logger="UnhandledError" Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 06:36:01.803003 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" podUID="84c6c0d5-d427-471a-8a54-9d3fc28264bc" Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 06:36:01.820119 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f8txz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
nova-operator-controller-manager-79556f57fc-hwmcq_openstack-operators(801bb8d6-c107-48ad-b985-62e932b38992): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 06:36:01 crc kubenswrapper[4955]: E1128 06:36:01.824140 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" podUID="801bb8d6-c107-48ad-b985-62e932b38992" Nov 28 06:36:01 crc kubenswrapper[4955]: I1128 06:36:01.824475 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" Nov 28 06:36:01 crc kubenswrapper[4955]: I1128 06:36:01.831996 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3f5477af-57e8-4a83-95ce-9fea4d62e797-cert\") pod \"infra-operator-controller-manager-57548d458d-rtv6s\" (UID: \"3f5477af-57e8-4a83-95ce-9fea4d62e797\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" Nov 28 06:36:01 crc kubenswrapper[4955]: I1128 06:36:01.859938 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.234153 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.248410 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1c5873f-0d08-4f51-aa91-822fc86a33e3-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6\" (UID: \"a1c5873f-0d08-4f51-aa91-822fc86a33e3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.357380 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.462034 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"] Nov 28 06:36:02 crc kubenswrapper[4955]: W1128 06:36:02.471417 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f5477af_57e8_4a83_95ce_9fea4d62e797.slice/crio-1b160d46beac2fb8b2a07aaef0d1271acf5b0f19eb7ac794d6a95754a2b7ef1c WatchSource:0}: Error finding container 1b160d46beac2fb8b2a07aaef0d1271acf5b0f19eb7ac794d6a95754a2b7ef1c: Status 404 returned error can't find the container with id 1b160d46beac2fb8b2a07aaef0d1271acf5b0f19eb7ac794d6a95754a2b7ef1c Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.634209 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" event={"ID":"3e51ea77-cbc1-4ebd-9247-335d93211353","Type":"ContainerStarted","Data":"8a91c9a753ff0e947043fb6b7a91347e2ab859d2a061d3fcb1c31d2e20d6b06e"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.639491 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89" event={"ID":"d2d018b1-e591-4109-9b83-82bc60b2cb59","Type":"ContainerStarted","Data":"70c92b9428bfe8d1fab8c3343cafe8d6eae490dc2f1f712c73fe74c1dd071b6c"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.642045 4955 generic.go:334] "Generic (PLEG): container finished" podID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerID="0c412c329667d12003864e312b3e8da7a10042a57cf129944191914b00886bda" exitCode=0 Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.642104 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtp9v" 
event={"ID":"a061ef17-3f7b-4a7d-8684-af18c9319228","Type":"ContainerDied","Data":"0c412c329667d12003864e312b3e8da7a10042a57cf129944191914b00886bda"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.642126 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtp9v" event={"ID":"a061ef17-3f7b-4a7d-8684-af18c9319228","Type":"ContainerStarted","Data":"b9c97b306cbb54d7bc5ddc5367ed7d2210b76dbc5a13e9930fc3da32734f784c"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.642388 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.642487 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.644763 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2" event={"ID":"d11a80a8-9bba-491e-aa38-e93e59c3343e","Type":"ContainerStarted","Data":"7fc3935075d4c3d3ee606ba54cd7cf53cb9364665456916d6d3279c471584076"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.646196 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-webhook-certs\") pod 
\"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.647732 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" event={"ID":"3f5477af-57e8-4a83-95ce-9fea4d62e797","Type":"ContainerStarted","Data":"1b160d46beac2fb8b2a07aaef0d1271acf5b0f19eb7ac794d6a95754a2b7ef1c"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.655603 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" event={"ID":"84c6c0d5-d427-471a-8a54-9d3fc28264bc","Type":"ContainerStarted","Data":"90d99017e88446e44b44dc563837f997775ad8285b661041d4da55e3b78e9ec6"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.656283 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" Nov 28 06:36:02 crc kubenswrapper[4955]: E1128 06:36:02.656851 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" podUID="84c6c0d5-d427-471a-8a54-9d3fc28264bc" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.664297 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/84a07034-e21d-4e5b-a6ef-ba76d30b662a-metrics-certs\") pod \"openstack-operator-controller-manager-bd7f7485b-zbpwx\" (UID: \"84a07034-e21d-4e5b-a6ef-ba76d30b662a\") " pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 
06:36:02.664311 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" event={"ID":"d2c0d9ce-4c16-451d-948b-75ae7bbca487","Type":"ContainerStarted","Data":"3ea1c59c8517c20fa791038f69804948f1d886257e37299a653edaea7758d9fd"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.683838 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" event={"ID":"8bcb6097-d2d8-4190-afbd-644daa5ce7b6","Type":"ContainerStarted","Data":"99398176a4793a0571e1a58e9282a88e93156415af44d2b86d3f25355c0549c5"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.683938 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" Nov 28 06:36:02 crc kubenswrapper[4955]: E1128 06:36:02.689950 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" podUID="8bcb6097-d2d8-4190-afbd-644daa5ce7b6" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.727042 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" event={"ID":"ef549437-6bef-428a-991f-b38cc613ec1e","Type":"ContainerStarted","Data":"03acc16aa70a24fabb567dc5042c528d4543b9984ced251ec23e55eb73a75b51"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.770852 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9" event={"ID":"d8ca8a28-b011-4a61-b37d-5f84543d63bb","Type":"ContainerStarted","Data":"6633cf95f1b956de3be3f1865894bdf3bba97872c1243c988e76bfea8c315413"} Nov 28 06:36:02 crc 
kubenswrapper[4955]: I1128 06:36:02.775653 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52" event={"ID":"042d3c47-fa72-4e2f-a127-2885c81ec7e4","Type":"ContainerStarted","Data":"0b7e1c720497be247415eddef2b465416eb2880711ea747bfcc6e721dae75227"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.781015 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5" event={"ID":"a9444b3d-85c5-4f44-953d-65a4dd2f30f2","Type":"ContainerStarted","Data":"75fb14f2c4f7967ed216c0647f72ea6d5c83d8e0e26da6ee98a02f4c071c9daf"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.783586 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" event={"ID":"813a8c4e-06bd-467e-9b80-0e3e88fb361a","Type":"ContainerStarted","Data":"faf65a1c69e37bda58547dc75363235649b84fab3bb89b6aee3b259ce7660831"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.787404 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" event={"ID":"801bb8d6-c107-48ad-b985-62e932b38992","Type":"ContainerStarted","Data":"b645a0f47bdc72f394dc8f20b8a8fa19c61dbebb63d6a847280443cf7f3f7ed8"} Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.788607 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" Nov 28 06:36:02 crc kubenswrapper[4955]: E1128 06:36:02.802767 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" podUID="801bb8d6-c107-48ad-b985-62e932b38992" Nov 28 
06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.809544 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:36:02 crc kubenswrapper[4955]: I1128 06:36:02.924446 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6"] Nov 28 06:36:03 crc kubenswrapper[4955]: I1128 06:36:03.795707 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" event={"ID":"a1c5873f-0d08-4f51-aa91-822fc86a33e3","Type":"ContainerStarted","Data":"83925fcf239c588bd54dec055580b9de1c62ca078d8791544e1559f93676bf1e"} Nov 28 06:36:03 crc kubenswrapper[4955]: E1128 06:36:03.798548 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" podUID="801bb8d6-c107-48ad-b985-62e932b38992" Nov 28 06:36:03 crc kubenswrapper[4955]: E1128 06:36:03.799744 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" podUID="8bcb6097-d2d8-4190-afbd-644daa5ce7b6" Nov 28 06:36:03 crc kubenswrapper[4955]: E1128 06:36:03.799893 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" 
podUID="84c6c0d5-d427-471a-8a54-9d3fc28264bc" Nov 28 06:36:04 crc kubenswrapper[4955]: I1128 06:36:04.219959 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx"] Nov 28 06:36:05 crc kubenswrapper[4955]: W1128 06:36:05.165606 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a07034_e21d_4e5b_a6ef_ba76d30b662a.slice/crio-414d2c596e8c3cb1d8cda1a2707d347255b8b9f094743e699391ddd1b035ca45 WatchSource:0}: Error finding container 414d2c596e8c3cb1d8cda1a2707d347255b8b9f094743e699391ddd1b035ca45: Status 404 returned error can't find the container with id 414d2c596e8c3cb1d8cda1a2707d347255b8b9f094743e699391ddd1b035ca45 Nov 28 06:36:05 crc kubenswrapper[4955]: I1128 06:36:05.809783 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" event={"ID":"84a07034-e21d-4e5b-a6ef-ba76d30b662a","Type":"ContainerStarted","Data":"414d2c596e8c3cb1d8cda1a2707d347255b8b9f094743e699391ddd1b035ca45"} Nov 28 06:36:06 crc kubenswrapper[4955]: I1128 06:36:06.187685 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" Nov 28 06:36:06 crc kubenswrapper[4955]: E1128 06:36:06.190887 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" podUID="8bcb6097-d2d8-4190-afbd-644daa5ce7b6" Nov 28 06:36:06 crc kubenswrapper[4955]: I1128 06:36:06.238365 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" Nov 28 06:36:06 
crc kubenswrapper[4955]: E1128 06:36:06.240784 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" podUID="84c6c0d5-d427-471a-8a54-9d3fc28264bc" Nov 28 06:36:06 crc kubenswrapper[4955]: I1128 06:36:06.645574 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" Nov 28 06:36:06 crc kubenswrapper[4955]: E1128 06:36:06.647034 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" podUID="801bb8d6-c107-48ad-b985-62e932b38992" Nov 28 06:36:24 crc kubenswrapper[4955]: E1128 06:36:21.774812 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa" Nov 28 06:36:24 crc kubenswrapper[4955]: E1128 06:36:21.775679 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jlwzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-4vhhh_openstack-operators(e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:36:24 crc kubenswrapper[4955]: I1128 06:36:23.393283 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:36:24 crc kubenswrapper[4955]: I1128 06:36:23.393343 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:36:24 crc kubenswrapper[4955]: E1128 06:36:24.583276 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6" Nov 28 06:36:24 crc kubenswrapper[4955]: E1128 06:36:24.583756 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4hhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-qtcjk_openstack-operators(f0d92863-0f89-415d-b4a3-24e09fb4ec02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:36:24 crc kubenswrapper[4955]: E1128 06:36:24.595767 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Nov 28 06:36:24 crc kubenswrapper[4955]: E1128 06:36:24.595893 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9nwgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7b64f4fb85-5kt75_openstack-operators(3e51ea77-cbc1-4ebd-9247-335d93211353): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:36:24 crc kubenswrapper[4955]: E1128 06:36:24.597041 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" podUID="3e51ea77-cbc1-4ebd-9247-335d93211353" Nov 28 06:36:24 crc kubenswrapper[4955]: I1128 06:36:24.980731 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" Nov 28 06:36:24 crc kubenswrapper[4955]: E1128 06:36:24.980834 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" podUID="3e51ea77-cbc1-4ebd-9247-335d93211353" Nov 28 06:36:24 crc kubenswrapper[4955]: I1128 06:36:24.981238 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" Nov 28 06:36:25 crc kubenswrapper[4955]: E1128 06:36:25.091836 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423" Nov 28 06:36:25 crc kubenswrapper[4955]: E1128 06:36:25.092077 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x5bd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-57988cc5b5-4csc4_openstack-operators(5d9f654a-a223-4b91-93fd-301807c6f29a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:36:25 crc kubenswrapper[4955]: E1128 06:36:25.546643 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41" Nov 28 06:36:25 crc kubenswrapper[4955]: E1128 06:36:25.547067 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-w
orker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-ante
lope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_U
RL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-
antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ope
nstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:qua
y.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAU
LT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tpbzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6_openstack-operators(a1c5873f-0d08-4f51-aa91-822fc86a33e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:36:25 crc kubenswrapper[4955]: I1128 06:36:25.997975 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt" event={"ID":"4871d492-a015-4a2b-9f6a-62e15bfdb825","Type":"ContainerStarted","Data":"9da584fadef0b4a2e1dc2ac94dcd16a836d888fa539654e211d07ab574e55f33"} Nov 28 06:36:26 crc kubenswrapper[4955]: I1128 06:36:26.002763 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" event={"ID":"84a07034-e21d-4e5b-a6ef-ba76d30b662a","Type":"ContainerStarted","Data":"35c07c7bb4c59f98b658d5fbb1291f992ea0929ec9623789870fa05205967ad1"} Nov 28 06:36:26 crc kubenswrapper[4955]: I1128 06:36:26.002792 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" Nov 28 06:36:26 crc kubenswrapper[4955]: I1128 
06:36:26.046029 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx" podStartSLOduration=40.046004629 podStartE2EDuration="40.046004629s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:36:26.040398977 +0000 UTC m=+908.629654558" watchObservedRunningTime="2025-11-28 06:36:26.046004629 +0000 UTC m=+908.635260209" Nov 28 06:36:26 crc kubenswrapper[4955]: E1128 06:36:26.310824 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" podUID="a1c5873f-0d08-4f51-aa91-822fc86a33e3" Nov 28 06:36:26 crc kubenswrapper[4955]: E1128 06:36:26.669651 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp" podUID="245721bd-2bc5-4f42-ac45-5ae0b07cd77e" Nov 28 06:36:26 crc kubenswrapper[4955]: E1128 06:36:26.860795 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" podUID="f0d92863-0f89-415d-b4a3-24e09fb4ec02" Nov 28 06:36:26 crc kubenswrapper[4955]: E1128 06:36:26.878786 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" podUID="5d9f654a-a223-4b91-93fd-301807c6f29a" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.008413 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89" event={"ID":"d2d018b1-e591-4109-9b83-82bc60b2cb59","Type":"ContainerStarted","Data":"27ede6377dae0cba7735b5c5bc14323efa01d5a664034ea7f4dfa59097b5cf31"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.009466 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.011136 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" event={"ID":"801bb8d6-c107-48ad-b985-62e932b38992","Type":"ContainerStarted","Data":"937cd47053514be7f9ac7a389f535d8a2b1b9ab872c02de2207526489f4f26d7"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.011417 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.012885 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2" event={"ID":"d11a80a8-9bba-491e-aa38-e93e59c3343e","Type":"ContainerStarted","Data":"907fefff97274a1d85bb8f1c3d6cb3211f123cbae9901110f8b91ba7fbaacdc4"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.013042 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.015096 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt" event={"ID":"4871d492-a015-4a2b-9f6a-62e15bfdb825","Type":"ContainerStarted","Data":"6a0e1675b8b4d9207971a4c85acbe960a96042c8708ab8fc0b423e2e9817528e"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.015212 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.016848 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp" event={"ID":"245721bd-2bc5-4f42-ac45-5ae0b07cd77e","Type":"ContainerStarted","Data":"029c8366f399f47dcc5f323d1259f1dd4b2fc6e08f0fe74541aa8e5c8dcd358d"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.018496 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" event={"ID":"8bcb6097-d2d8-4190-afbd-644daa5ce7b6","Type":"ContainerStarted","Data":"6b47f268e4b2c073c756796f1a026fe85e56b97cafc9b53dfe7c4c9da67fd31a"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.020340 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" event={"ID":"84c6c0d5-d427-471a-8a54-9d3fc28264bc","Type":"ContainerStarted","Data":"864a2d3052f41d4c5314b47fa522668b85eed34c3b89a1e3b51c267c9cafa743"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.022181 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" event={"ID":"d2c0d9ce-4c16-451d-948b-75ae7bbca487","Type":"ContainerStarted","Data":"bd1f5819ec2360e876265b49f088f92ec19b85e4b9529939f16712ecda99b974"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.022329 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.023175 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" event={"ID":"0811185a-c49e-4a81-b6d7-c786f590177b","Type":"ContainerStarted","Data":"9adaec41bee8be868b64601bf329b24ff9568f9dc7d6a677638e7a60ab45a5f7"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.024607 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52" event={"ID":"042d3c47-fa72-4e2f-a127-2885c81ec7e4","Type":"ContainerStarted","Data":"212e6058f124b13942a1ef30ab7585a7b7a7ad333c8073085d77428b59a757ac"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.025165 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.026968 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.030099 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5" event={"ID":"a9444b3d-85c5-4f44-953d-65a4dd2f30f2","Type":"ContainerStarted","Data":"8e378fb7b8111c08f18d4ead73e431c882a05796eeb6612f026edd57ff7cf462"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.030328 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.031639 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" 
event={"ID":"5d9f654a-a223-4b91-93fd-301807c6f29a","Type":"ContainerStarted","Data":"d91745ca8ff7ce13a0927e444c6d7f20000cfbc8faec1dc1c1bde4d9620eb6a3"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.032046 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5" Nov 28 06:36:27 crc kubenswrapper[4955]: E1128 06:36:27.033127 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" podUID="5d9f654a-a223-4b91-93fd-301807c6f29a" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.037165 4955 generic.go:334] "Generic (PLEG): container finished" podID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerID="a4279158e450746f3702bf9f42015a6c2f0bb38d324b705611757ff51e714fa3" exitCode=0 Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.037215 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtp9v" event={"ID":"a061ef17-3f7b-4a7d-8684-af18c9319228","Type":"ContainerDied","Data":"a4279158e450746f3702bf9f42015a6c2f0bb38d324b705611757ff51e714fa3"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.039597 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" event={"ID":"a1c5873f-0d08-4f51-aa91-822fc86a33e3","Type":"ContainerStarted","Data":"14a7696b9e5c7e47bbadffe04be1e8d47e9e650d7d182283834ae37bf2a58608"} Nov 28 06:36:27 crc kubenswrapper[4955]: E1128 06:36:27.040483 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" podUID="a1c5873f-0d08-4f51-aa91-822fc86a33e3" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.041648 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd" event={"ID":"cfa54a97-6210-4566-bf61-c0c7720ec0ec","Type":"ContainerStarted","Data":"33a42689c43020c702227f0753e1ece030bb8ede9258ee814625e46a4a6c84f8"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.045789 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb" event={"ID":"5daef806-96c3-439c-85f9-f1ef27a8be0d","Type":"ContainerStarted","Data":"94724c7d79f8694fdf86c70f87000c7426e480e3d93b7b405087aa03deb3836d"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.049605 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9" event={"ID":"d8ca8a28-b011-4a61-b37d-5f84543d63bb","Type":"ContainerStarted","Data":"6287dd4290fa5824352ed95005fbc377f441f7b3a8cfc291e567c584a67367a6"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.051706 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.053266 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.062825 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" Nov 28 06:36:27 crc kubenswrapper[4955]: 
I1128 06:36:27.063446 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" event={"ID":"ef549437-6bef-428a-991f-b38cc613ec1e","Type":"ContainerStarted","Data":"8c163ba0528afdea202b9725741edaba816c6e35723a36c246b5561984ce4111"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.064697 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.070058 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.080554 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" event={"ID":"3f5477af-57e8-4a83-95ce-9fea4d62e797","Type":"ContainerStarted","Data":"c7a26a850505acd2e8fcb182a43d7c0205006468f4494024c3b2a96daa5a44a3"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.089896 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" event={"ID":"f0d92863-0f89-415d-b4a3-24e09fb4ec02","Type":"ContainerStarted","Data":"2f85ccc8fe84fa4a747d94ca4287a785c9dba345d33088f211eaefc65004da1c"} Nov 28 06:36:27 crc kubenswrapper[4955]: E1128 06:36:27.091070 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" podUID="f0d92863-0f89-415d-b4a3-24e09fb4ec02" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.096946 4955 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" event={"ID":"813a8c4e-06bd-467e-9b80-0e3e88fb361a","Type":"ContainerStarted","Data":"986f4b30f6d5414fa2352709272c65adf55ebdba43a544f23b79cd2859c22b7f"} Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.097380 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.103387 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.131689 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-kdz89" podStartSLOduration=4.226228699 podStartE2EDuration="42.131675431s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.733408034 +0000 UTC m=+870.322663604" lastFinishedPulling="2025-11-28 06:36:25.638854776 +0000 UTC m=+908.228110336" observedRunningTime="2025-11-28 06:36:27.129855489 +0000 UTC m=+909.719111059" watchObservedRunningTime="2025-11-28 06:36:27.131675431 +0000 UTC m=+909.720931001" Nov 28 06:36:27 crc kubenswrapper[4955]: E1128 06:36:27.177867 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" podUID="e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.198856 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-8cmf7" podStartSLOduration=3.879888072 podStartE2EDuration="42.19883498s" 
podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.418765171 +0000 UTC m=+870.008020741" lastFinishedPulling="2025-11-28 06:36:25.737712079 +0000 UTC m=+908.326967649" observedRunningTime="2025-11-28 06:36:27.154955223 +0000 UTC m=+909.744210803" watchObservedRunningTime="2025-11-28 06:36:27.19883498 +0000 UTC m=+909.788090550" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.202079 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-p5n27" podStartSLOduration=3.398225147 podStartE2EDuration="42.202070613s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:46.845563814 +0000 UTC m=+869.434819384" lastFinishedPulling="2025-11-28 06:36:25.64940927 +0000 UTC m=+908.238664850" observedRunningTime="2025-11-28 06:36:27.182037225 +0000 UTC m=+909.771292855" watchObservedRunningTime="2025-11-28 06:36:27.202070613 +0000 UTC m=+909.791326183" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.229202 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-rvkg2" podStartSLOduration=3.269404237 podStartE2EDuration="41.229185596s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.767498799 +0000 UTC m=+870.356754369" lastFinishedPulling="2025-11-28 06:36:25.727280148 +0000 UTC m=+908.316535728" observedRunningTime="2025-11-28 06:36:27.222634807 +0000 UTC m=+909.811890387" watchObservedRunningTime="2025-11-28 06:36:27.229185596 +0000 UTC m=+909.818441166" Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.266640 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-n7jmb" podStartSLOduration=3.339177501 podStartE2EDuration="41.266623487s" 
podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.767470338 +0000 UTC m=+870.356725908" lastFinishedPulling="2025-11-28 06:36:25.694916314 +0000 UTC m=+908.284171894" observedRunningTime="2025-11-28 06:36:27.256361091 +0000 UTC m=+909.845616661" watchObservedRunningTime="2025-11-28 06:36:27.266623487 +0000 UTC m=+909.855879057"
Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.470211 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-hwmcq" podStartSLOduration=3.309551716 podStartE2EDuration="41.470194474s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.716791385 +0000 UTC m=+870.306046955" lastFinishedPulling="2025-11-28 06:36:25.877434123 +0000 UTC m=+908.466689713" observedRunningTime="2025-11-28 06:36:27.469268297 +0000 UTC m=+910.058523877" watchObservedRunningTime="2025-11-28 06:36:27.470194474 +0000 UTC m=+910.059450044"
Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.530817 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt" podStartSLOduration=4.726204394 podStartE2EDuration="42.530801534s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.775475259 +0000 UTC m=+870.364730829" lastFinishedPulling="2025-11-28 06:36:25.580072399 +0000 UTC m=+908.169327969" observedRunningTime="2025-11-28 06:36:27.529865857 +0000 UTC m=+910.119121447" watchObservedRunningTime="2025-11-28 06:36:27.530801534 +0000 UTC m=+910.120057104"
Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.531525 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-jtxkh" podStartSLOduration=3.938868634 podStartE2EDuration="42.531521634s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.252639615 +0000 UTC m=+869.841895185" lastFinishedPulling="2025-11-28 06:36:25.845292605 +0000 UTC m=+908.434548185" observedRunningTime="2025-11-28 06:36:27.498737398 +0000 UTC m=+910.087992988" watchObservedRunningTime="2025-11-28 06:36:27.531521634 +0000 UTC m=+910.120777204"
Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.591708 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-lgnvq" podStartSLOduration=4.552058147 podStartE2EDuration="42.591679171s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.687040886 +0000 UTC m=+870.276296456" lastFinishedPulling="2025-11-28 06:36:25.7266619 +0000 UTC m=+908.315917480" observedRunningTime="2025-11-28 06:36:27.578745538 +0000 UTC m=+910.168001128" watchObservedRunningTime="2025-11-28 06:36:27.591679171 +0000 UTC m=+910.180934751"
Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.655380 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-c7pkv" podStartSLOduration=4.23735523 podStartE2EDuration="42.655359958s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.419096501 +0000 UTC m=+870.008352071" lastFinishedPulling="2025-11-28 06:36:25.837101209 +0000 UTC m=+908.426356799" observedRunningTime="2025-11-28 06:36:27.651781305 +0000 UTC m=+910.241036895" watchObservedRunningTime="2025-11-28 06:36:27.655359958 +0000 UTC m=+910.244615538"
Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.684783 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-mwk52" podStartSLOduration=4.725829452 podStartE2EDuration="42.684762687s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.767497059 +0000 UTC m=+870.356752629" lastFinishedPulling="2025-11-28 06:36:25.726430274 +0000 UTC m=+908.315685864" observedRunningTime="2025-11-28 06:36:27.678822716 +0000 UTC m=+910.268078296" watchObservedRunningTime="2025-11-28 06:36:27.684762687 +0000 UTC m=+910.274018257"
Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.710997 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-mdrg9" podStartSLOduration=4.752356098 podStartE2EDuration="42.710981794s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.768022334 +0000 UTC m=+870.357277904" lastFinishedPulling="2025-11-28 06:36:25.72664802 +0000 UTC m=+908.315903600" observedRunningTime="2025-11-28 06:36:27.701977004 +0000 UTC m=+910.291232594" watchObservedRunningTime="2025-11-28 06:36:27.710981794 +0000 UTC m=+910.300237354"
Nov 28 06:36:27 crc kubenswrapper[4955]: I1128 06:36:27.797715 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-f77r5" podStartSLOduration=3.825827088 podStartE2EDuration="41.797691687s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.75092552 +0000 UTC m=+870.340181090" lastFinishedPulling="2025-11-28 06:36:25.722790099 +0000 UTC m=+908.312045689" observedRunningTime="2025-11-28 06:36:27.750945178 +0000 UTC m=+910.340200768" watchObservedRunningTime="2025-11-28 06:36:27.797691687 +0000 UTC m=+910.386947257"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.103938 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" event={"ID":"0811185a-c49e-4a81-b6d7-c786f590177b","Type":"ContainerStarted","Data":"457a8921d09477bfd77f52195f7dd0d73bb83fc9311c658d2da8d88118add9a1"}
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.104842 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.106250 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp" event={"ID":"245721bd-2bc5-4f42-ac45-5ae0b07cd77e","Type":"ContainerStarted","Data":"8d117fc8a8c052c8c7929cd23522d8987e8eddd847342b22e00e7eb1079034d2"}
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.106612 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.108582 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" event={"ID":"3e51ea77-cbc1-4ebd-9247-335d93211353","Type":"ContainerStarted","Data":"7c6123627a15b914e8ba175925893868c28b553ca3852da597cbcf29e125ec63"}
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.111229 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtp9v" event={"ID":"a061ef17-3f7b-4a7d-8684-af18c9319228","Type":"ContainerStarted","Data":"02e0cbb32404fe1195162fb101dc24fe20d6298b7784f1111007418b51619fce"}
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.112946 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd" event={"ID":"cfa54a97-6210-4566-bf61-c0c7720ec0ec","Type":"ContainerStarted","Data":"f77d8a8bee6f10637a020e815854bac8f85801105c1c0a6b6ffba1b62f46659b"}
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.113282 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.114425 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" event={"ID":"e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac","Type":"ContainerStarted","Data":"6038878fbac25b160f9f4d39bdbe6ab5e76bbd191c6cf31fd73b06786176c0e1"}
Nov 28 06:36:28 crc kubenswrapper[4955]: E1128 06:36:28.115347 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" podUID="e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.118036 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" event={"ID":"3f5477af-57e8-4a83-95ce-9fea4d62e797","Type":"ContainerStarted","Data":"48a9ad94de73d7e233a2c3f3e5e97c6bb7d0d7d7c13221bad35fa0f4833afbef"}
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.118059 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.118070 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"
Nov 28 06:36:28 crc kubenswrapper[4955]: E1128 06:36:28.119068 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" podUID="a1c5873f-0d08-4f51-aa91-822fc86a33e3"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.126737 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs" podStartSLOduration=4.291106741 podStartE2EDuration="42.126726406s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.81604469 +0000 UTC m=+870.405300260" lastFinishedPulling="2025-11-28 06:36:25.651664355 +0000 UTC m=+908.240919925" observedRunningTime="2025-11-28 06:36:28.124821931 +0000 UTC m=+910.714077511" watchObservedRunningTime="2025-11-28 06:36:28.126726406 +0000 UTC m=+910.715981976"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.185709 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp" podStartSLOduration=3.447169628 podStartE2EDuration="43.185692448s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.767523489 +0000 UTC m=+870.356779069" lastFinishedPulling="2025-11-28 06:36:27.506046319 +0000 UTC m=+910.095301889" observedRunningTime="2025-11-28 06:36:28.18436263 +0000 UTC m=+910.773618210" watchObservedRunningTime="2025-11-28 06:36:28.185692448 +0000 UTC m=+910.774948018"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.188924 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtp9v" podStartSLOduration=7.646290434 podStartE2EDuration="32.188910361s" podCreationTimestamp="2025-11-28 06:35:56 +0000 UTC" firstStartedPulling="2025-11-28 06:36:03.094824054 +0000 UTC m=+885.684079624" lastFinishedPulling="2025-11-28 06:36:27.637443971 +0000 UTC m=+910.226699551" observedRunningTime="2025-11-28 06:36:28.156897587 +0000 UTC m=+910.746153167" watchObservedRunningTime="2025-11-28 06:36:28.188910361 +0000 UTC m=+910.778165931"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.207594 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-5kt75" podStartSLOduration=29.487243296 podStartE2EDuration="43.20756936s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.051396996 +0000 UTC m=+869.640652566" lastFinishedPulling="2025-11-28 06:36:00.77172306 +0000 UTC m=+883.360978630" observedRunningTime="2025-11-28 06:36:28.202079311 +0000 UTC m=+910.791334901" watchObservedRunningTime="2025-11-28 06:36:28.20756936 +0000 UTC m=+910.796824930"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.283904 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s" podStartSLOduration=20.126544126 podStartE2EDuration="43.283887043s" podCreationTimestamp="2025-11-28 06:35:45 +0000 UTC" firstStartedPulling="2025-11-28 06:36:02.481689964 +0000 UTC m=+885.070945534" lastFinishedPulling="2025-11-28 06:36:25.639032881 +0000 UTC m=+908.228288451" observedRunningTime="2025-11-28 06:36:28.282354439 +0000 UTC m=+910.871610009" watchObservedRunningTime="2025-11-28 06:36:28.283887043 +0000 UTC m=+910.873142613"
Nov 28 06:36:28 crc kubenswrapper[4955]: I1128 06:36:28.301357 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd" podStartSLOduration=4.454319003 podStartE2EDuration="42.301322016s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.771204496 +0000 UTC m=+870.360460056" lastFinishedPulling="2025-11-28 06:36:25.618207499 +0000 UTC m=+908.207463069" observedRunningTime="2025-11-28 06:36:28.297034603 +0000 UTC m=+910.886290173" watchObservedRunningTime="2025-11-28 06:36:28.301322016 +0000 UTC m=+910.890577576"
Nov 28 06:36:31 crc kubenswrapper[4955]: I1128 06:36:31.870769 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-rtv6s"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.309751 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s9fgt"]
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.312378 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.328451 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9fgt"]
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.450883 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcm9c\" (UniqueName: \"kubernetes.io/projected/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-kube-api-access-bcm9c\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.451150 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-utilities\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.451393 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-catalog-content\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.553587 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcm9c\" (UniqueName: \"kubernetes.io/projected/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-kube-api-access-bcm9c\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.553637 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-utilities\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.553688 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-catalog-content\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.554214 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-catalog-content\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.554634 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-utilities\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.602323 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcm9c\" (UniqueName: \"kubernetes.io/projected/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-kube-api-access-bcm9c\") pod \"redhat-marketplace-s9fgt\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") " pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.643287 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:32 crc kubenswrapper[4955]: I1128 06:36:32.823946 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-bd7f7485b-zbpwx"
Nov 28 06:36:33 crc kubenswrapper[4955]: I1128 06:36:33.109183 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9fgt"]
Nov 28 06:36:33 crc kubenswrapper[4955]: I1128 06:36:33.177309 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9fgt" event={"ID":"7c85c451-3ca7-4579-aef9-a4bf0a8040b4","Type":"ContainerStarted","Data":"573b70da6c7d8e10929c75cd33dc5ee7beafaeba808888a7c0c6abe632117506"}
Nov 28 06:36:34 crc kubenswrapper[4955]: I1128 06:36:34.185020 4955 generic.go:334] "Generic (PLEG): container finished" podID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerID="afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104" exitCode=0
Nov 28 06:36:34 crc kubenswrapper[4955]: I1128 06:36:34.185080 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9fgt" event={"ID":"7c85c451-3ca7-4579-aef9-a4bf0a8040b4","Type":"ContainerDied","Data":"afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104"}
Nov 28 06:36:35 crc kubenswrapper[4955]: I1128 06:36:35.193476 4955 generic.go:334] "Generic (PLEG): container finished" podID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerID="fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc" exitCode=0
Nov 28 06:36:35 crc kubenswrapper[4955]: I1128 06:36:35.193580 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9fgt" event={"ID":"7c85c451-3ca7-4579-aef9-a4bf0a8040b4","Type":"ContainerDied","Data":"fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc"}
Nov 28 06:36:36 crc kubenswrapper[4955]: I1128 06:36:36.202858 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9fgt" event={"ID":"7c85c451-3ca7-4579-aef9-a4bf0a8040b4","Type":"ContainerStarted","Data":"d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5"}
Nov 28 06:36:36 crc kubenswrapper[4955]: I1128 06:36:36.228119 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s9fgt" podStartSLOduration=2.813552864 podStartE2EDuration="4.22810045s" podCreationTimestamp="2025-11-28 06:36:32 +0000 UTC" firstStartedPulling="2025-11-28 06:36:34.186521433 +0000 UTC m=+916.775777013" lastFinishedPulling="2025-11-28 06:36:35.601069029 +0000 UTC m=+918.190324599" observedRunningTime="2025-11-28 06:36:36.222045275 +0000 UTC m=+918.811300885" watchObservedRunningTime="2025-11-28 06:36:36.22810045 +0000 UTC m=+918.817356030"
Nov 28 06:36:36 crc kubenswrapper[4955]: I1128 06:36:36.288963 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-4pmmp"
Nov 28 06:36:36 crc kubenswrapper[4955]: I1128 06:36:36.568201 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-fz6rt"
Nov 28 06:36:36 crc kubenswrapper[4955]: I1128 06:36:36.858018 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-q4wgd"
Nov 28 06:36:37 crc kubenswrapper[4955]: I1128 06:36:37.088603 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gtp9v"
Nov 28 06:36:37 crc kubenswrapper[4955]: I1128 06:36:37.088852 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gtp9v"
Nov 28 06:36:37 crc kubenswrapper[4955]: I1128 06:36:37.123591 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-mmqbs"
Nov 28 06:36:37 crc kubenswrapper[4955]: I1128 06:36:37.144432 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gtp9v"
Nov 28 06:36:37 crc kubenswrapper[4955]: I1128 06:36:37.256395 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gtp9v"
Nov 28 06:36:38 crc kubenswrapper[4955]: I1128 06:36:38.677881 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtp9v"]
Nov 28 06:36:38 crc kubenswrapper[4955]: E1128 06:36:38.708081 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\"" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" podUID="e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac"
Nov 28 06:36:39 crc kubenswrapper[4955]: I1128 06:36:39.226264 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtp9v" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerName="registry-server" containerID="cri-o://02e0cbb32404fe1195162fb101dc24fe20d6298b7784f1111007418b51619fce" gracePeriod=2
Nov 28 06:36:40 crc kubenswrapper[4955]: I1128 06:36:40.234002 4955 generic.go:334] "Generic (PLEG): container finished" podID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerID="02e0cbb32404fe1195162fb101dc24fe20d6298b7784f1111007418b51619fce" exitCode=0
Nov 28 06:36:40 crc kubenswrapper[4955]: I1128 06:36:40.234048 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtp9v" event={"ID":"a061ef17-3f7b-4a7d-8684-af18c9319228","Type":"ContainerDied","Data":"02e0cbb32404fe1195162fb101dc24fe20d6298b7784f1111007418b51619fce"}
Nov 28 06:36:40 crc kubenswrapper[4955]: E1128 06:36:40.705750 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:225958f250a1075b69439d776a13acc45c78695c21abda23600fb53ca1640423\\\"\"" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" podUID="5d9f654a-a223-4b91-93fd-301807c6f29a"
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.003471 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtp9v"
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.099250 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-catalog-content\") pod \"a061ef17-3f7b-4a7d-8684-af18c9319228\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") "
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.099319 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-utilities\") pod \"a061ef17-3f7b-4a7d-8684-af18c9319228\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") "
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.099392 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb982\" (UniqueName: \"kubernetes.io/projected/a061ef17-3f7b-4a7d-8684-af18c9319228-kube-api-access-tb982\") pod \"a061ef17-3f7b-4a7d-8684-af18c9319228\" (UID: \"a061ef17-3f7b-4a7d-8684-af18c9319228\") "
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.100998 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-utilities" (OuterVolumeSpecName: "utilities") pod "a061ef17-3f7b-4a7d-8684-af18c9319228" (UID: "a061ef17-3f7b-4a7d-8684-af18c9319228"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.105085 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a061ef17-3f7b-4a7d-8684-af18c9319228-kube-api-access-tb982" (OuterVolumeSpecName: "kube-api-access-tb982") pod "a061ef17-3f7b-4a7d-8684-af18c9319228" (UID: "a061ef17-3f7b-4a7d-8684-af18c9319228"). InnerVolumeSpecName "kube-api-access-tb982". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.143582 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a061ef17-3f7b-4a7d-8684-af18c9319228" (UID: "a061ef17-3f7b-4a7d-8684-af18c9319228"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.202386 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.202454 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a061ef17-3f7b-4a7d-8684-af18c9319228-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.202478 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb982\" (UniqueName: \"kubernetes.io/projected/a061ef17-3f7b-4a7d-8684-af18c9319228-kube-api-access-tb982\") on node \"crc\" DevicePath \"\""
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.254696 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtp9v" event={"ID":"a061ef17-3f7b-4a7d-8684-af18c9319228","Type":"ContainerDied","Data":"b9c97b306cbb54d7bc5ddc5367ed7d2210b76dbc5a13e9930fc3da32734f784c"}
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.254752 4955 scope.go:117] "RemoveContainer" containerID="02e0cbb32404fe1195162fb101dc24fe20d6298b7784f1111007418b51619fce"
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.254810 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtp9v"
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.275217 4955 scope.go:117] "RemoveContainer" containerID="a4279158e450746f3702bf9f42015a6c2f0bb38d324b705611757ff51e714fa3"
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.294563 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtp9v"]
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.298991 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtp9v"]
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.305782 4955 scope.go:117] "RemoveContainer" containerID="0c412c329667d12003864e312b3e8da7a10042a57cf129944191914b00886bda"
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.643664 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.643725 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:42 crc kubenswrapper[4955]: I1128 06:36:42.690077 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:42 crc kubenswrapper[4955]: E1128 06:36:42.705654 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" podUID="f0d92863-0f89-415d-b4a3-24e09fb4ec02"
Nov 28 06:36:43 crc kubenswrapper[4955]: I1128 06:36:43.330338 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:43 crc kubenswrapper[4955]: I1128 06:36:43.712076 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" path="/var/lib/kubelet/pods/a061ef17-3f7b-4a7d-8684-af18c9319228/volumes"
Nov 28 06:36:44 crc kubenswrapper[4955]: I1128 06:36:44.299467 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" event={"ID":"a1c5873f-0d08-4f51-aa91-822fc86a33e3","Type":"ContainerStarted","Data":"6e4976fd094eaf42cfe2fb83c1acfcc3eec9fcc38d6adaa008a9a60ae27204ee"}
Nov 28 06:36:44 crc kubenswrapper[4955]: I1128 06:36:44.350061 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" podStartSLOduration=18.65380469 podStartE2EDuration="58.350026078s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:36:03.513928133 +0000 UTC m=+886.103183703" lastFinishedPulling="2025-11-28 06:36:43.210149511 +0000 UTC m=+925.799405091" observedRunningTime="2025-11-28 06:36:44.341765589 +0000 UTC m=+926.931021199" watchObservedRunningTime="2025-11-28 06:36:44.350026078 +0000 UTC m=+926.939281698"
Nov 28 06:36:45 crc kubenswrapper[4955]: I1128 06:36:45.075692 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9fgt"]
Nov 28 06:36:45 crc kubenswrapper[4955]: I1128 06:36:45.307767 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s9fgt" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerName="registry-server" containerID="cri-o://d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5" gracePeriod=2
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.258708 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.316861 4955 generic.go:334] "Generic (PLEG): container finished" podID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerID="d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5" exitCode=0
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.316906 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9fgt" event={"ID":"7c85c451-3ca7-4579-aef9-a4bf0a8040b4","Type":"ContainerDied","Data":"d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5"}
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.316919 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s9fgt"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.316943 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s9fgt" event={"ID":"7c85c451-3ca7-4579-aef9-a4bf0a8040b4","Type":"ContainerDied","Data":"573b70da6c7d8e10929c75cd33dc5ee7beafaeba808888a7c0c6abe632117506"}
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.316962 4955 scope.go:117] "RemoveContainer" containerID="d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.333327 4955 scope.go:117] "RemoveContainer" containerID="fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.353933 4955 scope.go:117] "RemoveContainer" containerID="afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.359785 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-utilities\") pod \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") "
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.359944 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-catalog-content\") pod \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") "
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.359988 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcm9c\" (UniqueName: \"kubernetes.io/projected/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-kube-api-access-bcm9c\") pod \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\" (UID: \"7c85c451-3ca7-4579-aef9-a4bf0a8040b4\") "
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.360877 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-utilities" (OuterVolumeSpecName: "utilities") pod "7c85c451-3ca7-4579-aef9-a4bf0a8040b4" (UID: "7c85c451-3ca7-4579-aef9-a4bf0a8040b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.366319 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-kube-api-access-bcm9c" (OuterVolumeSpecName: "kube-api-access-bcm9c") pod "7c85c451-3ca7-4579-aef9-a4bf0a8040b4" (UID: "7c85c451-3ca7-4579-aef9-a4bf0a8040b4"). InnerVolumeSpecName "kube-api-access-bcm9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.376900 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c85c451-3ca7-4579-aef9-a4bf0a8040b4" (UID: "7c85c451-3ca7-4579-aef9-a4bf0a8040b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.398299 4955 scope.go:117] "RemoveContainer" containerID="d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5"
Nov 28 06:36:46 crc kubenswrapper[4955]: E1128 06:36:46.398797 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5\": container with ID starting with d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5 not found: ID does not exist" containerID="d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.398845 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5"} err="failed to get container status \"d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5\": rpc error: code = NotFound desc = could not find container \"d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5\": container with ID starting with d23ad08f36846306fef775b99d044198504f30972542fc2a3d772bf7fa983ee5 not found: ID does not exist"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.398872 4955 scope.go:117] "RemoveContainer" containerID="fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc"
Nov 28 06:36:46 crc kubenswrapper[4955]: E1128 06:36:46.399235 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc\": container with ID starting with fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc not found: ID does not exist" containerID="fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.399277 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc"} err="failed to get container status \"fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc\": rpc error: code = NotFound desc = could not find container \"fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc\": container with ID starting with fa7da8f782a860cec6f51dc1801bf98731f26bb53ba775803f4d40138b6339bc not found: ID does not exist"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.399305 4955 scope.go:117] "RemoveContainer" containerID="afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104"
Nov 28 06:36:46 crc kubenswrapper[4955]: E1128 06:36:46.399623 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104\": container with ID starting with afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104 not found: ID does not exist" containerID="afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104"
Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.399662 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104"} err="failed to get container status \"afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104\": rpc error: code = NotFound desc = could 
not find container \"afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104\": container with ID starting with afc5b70f4d1b462b8d683231b3eaf63285509b8bef1369302e44490922837104 not found: ID does not exist" Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.461864 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.461898 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcm9c\" (UniqueName: \"kubernetes.io/projected/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-kube-api-access-bcm9c\") on node \"crc\" DevicePath \"\"" Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.461907 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c85c451-3ca7-4579-aef9-a4bf0a8040b4-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.676816 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9fgt"] Nov 28 06:36:46 crc kubenswrapper[4955]: I1128 06:36:46.689554 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s9fgt"] Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.484317 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zdvpd"] Nov 28 06:36:47 crc kubenswrapper[4955]: E1128 06:36:47.485332 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerName="extract-utilities" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.485358 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerName="extract-utilities" Nov 28 06:36:47 crc kubenswrapper[4955]: E1128 
06:36:47.485400 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerName="registry-server" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.485415 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerName="registry-server" Nov 28 06:36:47 crc kubenswrapper[4955]: E1128 06:36:47.485442 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerName="extract-content" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.485456 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerName="extract-content" Nov 28 06:36:47 crc kubenswrapper[4955]: E1128 06:36:47.485536 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerName="extract-content" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.485550 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerName="extract-content" Nov 28 06:36:47 crc kubenswrapper[4955]: E1128 06:36:47.485580 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerName="extract-utilities" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.485593 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerName="extract-utilities" Nov 28 06:36:47 crc kubenswrapper[4955]: E1128 06:36:47.485618 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerName="registry-server" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.485633 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerName="registry-server" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 
06:36:47.485894 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a061ef17-3f7b-4a7d-8684-af18c9319228" containerName="registry-server" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.485916 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" containerName="registry-server" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.487853 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.497122 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdvpd"] Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.577637 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-utilities\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.577696 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-catalog-content\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.577746 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzljm\" (UniqueName: \"kubernetes.io/projected/fe99da91-d8e5-4d96-b466-5069fe22b26b-kube-api-access-tzljm\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 
06:36:47.679405 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-utilities\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.679671 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-catalog-content\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.679816 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzljm\" (UniqueName: \"kubernetes.io/projected/fe99da91-d8e5-4d96-b466-5069fe22b26b-kube-api-access-tzljm\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.679857 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-utilities\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.680173 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-catalog-content\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.698998 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tzljm\" (UniqueName: \"kubernetes.io/projected/fe99da91-d8e5-4d96-b466-5069fe22b26b-kube-api-access-tzljm\") pod \"redhat-operators-zdvpd\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.713714 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c85c451-3ca7-4579-aef9-a4bf0a8040b4" path="/var/lib/kubelet/pods/7c85c451-3ca7-4579-aef9-a4bf0a8040b4/volumes" Nov 28 06:36:47 crc kubenswrapper[4955]: I1128 06:36:47.805559 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:48 crc kubenswrapper[4955]: I1128 06:36:48.274845 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdvpd"] Nov 28 06:36:48 crc kubenswrapper[4955]: W1128 06:36:48.283279 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe99da91_d8e5_4d96_b466_5069fe22b26b.slice/crio-a2b9381e645835d1bdf265618cec39341d19511417c6fe7986e2bf398e51e4b6 WatchSource:0}: Error finding container a2b9381e645835d1bdf265618cec39341d19511417c6fe7986e2bf398e51e4b6: Status 404 returned error can't find the container with id a2b9381e645835d1bdf265618cec39341d19511417c6fe7986e2bf398e51e4b6 Nov 28 06:36:48 crc kubenswrapper[4955]: I1128 06:36:48.336806 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvpd" event={"ID":"fe99da91-d8e5-4d96-b466-5069fe22b26b","Type":"ContainerStarted","Data":"a2b9381e645835d1bdf265618cec39341d19511417c6fe7986e2bf398e51e4b6"} Nov 28 06:36:49 crc kubenswrapper[4955]: I1128 06:36:49.348146 4955 generic.go:334] "Generic (PLEG): container finished" podID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerID="c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a" 
exitCode=0 Nov 28 06:36:49 crc kubenswrapper[4955]: I1128 06:36:49.348216 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvpd" event={"ID":"fe99da91-d8e5-4d96-b466-5069fe22b26b","Type":"ContainerDied","Data":"c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a"} Nov 28 06:36:51 crc kubenswrapper[4955]: I1128 06:36:51.366075 4955 generic.go:334] "Generic (PLEG): container finished" podID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerID="e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6" exitCode=0 Nov 28 06:36:51 crc kubenswrapper[4955]: I1128 06:36:51.366161 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvpd" event={"ID":"fe99da91-d8e5-4d96-b466-5069fe22b26b","Type":"ContainerDied","Data":"e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6"} Nov 28 06:36:52 crc kubenswrapper[4955]: I1128 06:36:52.358114 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" Nov 28 06:36:52 crc kubenswrapper[4955]: I1128 06:36:52.365403 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6" Nov 28 06:36:52 crc kubenswrapper[4955]: I1128 06:36:52.376410 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvpd" event={"ID":"fe99da91-d8e5-4d96-b466-5069fe22b26b","Type":"ContainerStarted","Data":"f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6"} Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.386576 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" 
event={"ID":"e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac","Type":"ContainerStarted","Data":"d240afa4ba878dd4363b16f6f23e99f273bfb8a2ee234fd033c7084b26f175b4"} Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.387072 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.392624 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.392694 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.392750 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.393633 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4f33502a89d814132c8a3643f347e9c608f66ebfe86f1fe67c34b4729fe71bd9"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.393733 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" 
containerName="machine-config-daemon" containerID="cri-o://4f33502a89d814132c8a3643f347e9c608f66ebfe86f1fe67c34b4729fe71bd9" gracePeriod=600 Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.405391 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" podStartSLOduration=2.840407761 podStartE2EDuration="1m7.405372281s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.775617543 +0000 UTC m=+870.364873113" lastFinishedPulling="2025-11-28 06:36:52.340582063 +0000 UTC m=+934.929837633" observedRunningTime="2025-11-28 06:36:53.403030843 +0000 UTC m=+935.992286433" watchObservedRunningTime="2025-11-28 06:36:53.405372281 +0000 UTC m=+935.994627851" Nov 28 06:36:53 crc kubenswrapper[4955]: I1128 06:36:53.408483 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zdvpd" podStartSLOduration=3.9468403260000002 podStartE2EDuration="6.40847384s" podCreationTimestamp="2025-11-28 06:36:47 +0000 UTC" firstStartedPulling="2025-11-28 06:36:49.353127639 +0000 UTC m=+931.942383209" lastFinishedPulling="2025-11-28 06:36:51.814761153 +0000 UTC m=+934.404016723" observedRunningTime="2025-11-28 06:36:52.449804746 +0000 UTC m=+935.039060326" watchObservedRunningTime="2025-11-28 06:36:53.40847384 +0000 UTC m=+935.997729410" Nov 28 06:36:54 crc kubenswrapper[4955]: I1128 06:36:54.397614 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" event={"ID":"5d9f654a-a223-4b91-93fd-301807c6f29a","Type":"ContainerStarted","Data":"177653a9637848e8c22a97ebaf640b73aa47e183d65a33b0e9eaf67971156f30"} Nov 28 06:36:54 crc kubenswrapper[4955]: I1128 06:36:54.398089 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" Nov 28 
06:36:54 crc kubenswrapper[4955]: I1128 06:36:54.402188 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="4f33502a89d814132c8a3643f347e9c608f66ebfe86f1fe67c34b4729fe71bd9" exitCode=0 Nov 28 06:36:54 crc kubenswrapper[4955]: I1128 06:36:54.402237 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"4f33502a89d814132c8a3643f347e9c608f66ebfe86f1fe67c34b4729fe71bd9"} Nov 28 06:36:54 crc kubenswrapper[4955]: I1128 06:36:54.403203 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"5a33364ffc1fcadc84c98fe0fe29a3e3b087189f2758e47ffce1858ea966d6d9"} Nov 28 06:36:54 crc kubenswrapper[4955]: I1128 06:36:54.403223 4955 scope.go:117] "RemoveContainer" containerID="a48f5c76d873d06051ccff10b32bc473afff507589be9330f056de9d4b7137d0" Nov 28 06:36:54 crc kubenswrapper[4955]: I1128 06:36:54.415761 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" podStartSLOduration=2.029873903 podStartE2EDuration="1m8.415746349s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.775595722 +0000 UTC m=+870.364851292" lastFinishedPulling="2025-11-28 06:36:54.161468168 +0000 UTC m=+936.750723738" observedRunningTime="2025-11-28 06:36:54.414946546 +0000 UTC m=+937.004202126" watchObservedRunningTime="2025-11-28 06:36:54.415746349 +0000 UTC m=+937.005001919" Nov 28 06:36:57 crc kubenswrapper[4955]: I1128 06:36:57.101895 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-4vhhh" Nov 28 06:36:57 crc 
kubenswrapper[4955]: I1128 06:36:57.806299 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:57 crc kubenswrapper[4955]: I1128 06:36:57.806758 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:36:58 crc kubenswrapper[4955]: I1128 06:36:58.443078 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" event={"ID":"f0d92863-0f89-415d-b4a3-24e09fb4ec02","Type":"ContainerStarted","Data":"e46328399088983b0cc203c9dabd78e1cba9425419078dd41901b6aae187d370"} Nov 28 06:36:58 crc kubenswrapper[4955]: I1128 06:36:58.443732 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" Nov 28 06:36:58 crc kubenswrapper[4955]: I1128 06:36:58.872229 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zdvpd" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="registry-server" probeResult="failure" output=< Nov 28 06:36:58 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Nov 28 06:36:58 crc kubenswrapper[4955]: > Nov 28 06:37:06 crc kubenswrapper[4955]: I1128 06:37:06.710824 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" Nov 28 06:37:06 crc kubenswrapper[4955]: I1128 06:37:06.734823 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-qtcjk" podStartSLOduration=10.299476373 podStartE2EDuration="1m20.734792501s" podCreationTimestamp="2025-11-28 06:35:46 +0000 UTC" firstStartedPulling="2025-11-28 06:35:47.780583256 +0000 UTC m=+870.369838826" lastFinishedPulling="2025-11-28 06:36:58.215899384 
+0000 UTC m=+940.805154954" observedRunningTime="2025-11-28 06:36:58.464737667 +0000 UTC m=+941.053993267" watchObservedRunningTime="2025-11-28 06:37:06.734792501 +0000 UTC m=+949.324048111" Nov 28 06:37:06 crc kubenswrapper[4955]: I1128 06:37:06.823382 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-4csc4" Nov 28 06:37:07 crc kubenswrapper[4955]: I1128 06:37:07.874006 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:37:07 crc kubenswrapper[4955]: I1128 06:37:07.924874 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:37:08 crc kubenswrapper[4955]: I1128 06:37:08.105882 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdvpd"] Nov 28 06:37:09 crc kubenswrapper[4955]: I1128 06:37:09.532094 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zdvpd" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="registry-server" containerID="cri-o://f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6" gracePeriod=2 Nov 28 06:37:09 crc kubenswrapper[4955]: I1128 06:37:09.969576 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.032635 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-utilities\") pod \"fe99da91-d8e5-4d96-b466-5069fe22b26b\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.032708 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzljm\" (UniqueName: \"kubernetes.io/projected/fe99da91-d8e5-4d96-b466-5069fe22b26b-kube-api-access-tzljm\") pod \"fe99da91-d8e5-4d96-b466-5069fe22b26b\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.032787 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-catalog-content\") pod \"fe99da91-d8e5-4d96-b466-5069fe22b26b\" (UID: \"fe99da91-d8e5-4d96-b466-5069fe22b26b\") " Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.033838 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-utilities" (OuterVolumeSpecName: "utilities") pod "fe99da91-d8e5-4d96-b466-5069fe22b26b" (UID: "fe99da91-d8e5-4d96-b466-5069fe22b26b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.039897 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe99da91-d8e5-4d96-b466-5069fe22b26b-kube-api-access-tzljm" (OuterVolumeSpecName: "kube-api-access-tzljm") pod "fe99da91-d8e5-4d96-b466-5069fe22b26b" (UID: "fe99da91-d8e5-4d96-b466-5069fe22b26b"). InnerVolumeSpecName "kube-api-access-tzljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.134828 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.134871 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzljm\" (UniqueName: \"kubernetes.io/projected/fe99da91-d8e5-4d96-b466-5069fe22b26b-kube-api-access-tzljm\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.135901 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe99da91-d8e5-4d96-b466-5069fe22b26b" (UID: "fe99da91-d8e5-4d96-b466-5069fe22b26b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.236285 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe99da91-d8e5-4d96-b466-5069fe22b26b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.545407 4955 generic.go:334] "Generic (PLEG): container finished" podID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerID="f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6" exitCode=0 Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.545548 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdvpd" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.545606 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvpd" event={"ID":"fe99da91-d8e5-4d96-b466-5069fe22b26b","Type":"ContainerDied","Data":"f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6"} Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.547364 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdvpd" event={"ID":"fe99da91-d8e5-4d96-b466-5069fe22b26b","Type":"ContainerDied","Data":"a2b9381e645835d1bdf265618cec39341d19511417c6fe7986e2bf398e51e4b6"} Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.547403 4955 scope.go:117] "RemoveContainer" containerID="f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.586771 4955 scope.go:117] "RemoveContainer" containerID="e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.610633 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdvpd"] Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.618407 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zdvpd"] Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.628818 4955 scope.go:117] "RemoveContainer" containerID="c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.667861 4955 scope.go:117] "RemoveContainer" containerID="f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6" Nov 28 06:37:10 crc kubenswrapper[4955]: E1128 06:37:10.668836 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6\": container with ID starting with f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6 not found: ID does not exist" containerID="f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.668885 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6"} err="failed to get container status \"f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6\": rpc error: code = NotFound desc = could not find container \"f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6\": container with ID starting with f27d3ab1bf06de4258c50c6937c051804a943d74431c3017928103fa3782edd6 not found: ID does not exist" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.668919 4955 scope.go:117] "RemoveContainer" containerID="e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6" Nov 28 06:37:10 crc kubenswrapper[4955]: E1128 06:37:10.669349 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6\": container with ID starting with e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6 not found: ID does not exist" containerID="e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.669387 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6"} err="failed to get container status \"e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6\": rpc error: code = NotFound desc = could not find container \"e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6\": container with ID 
starting with e8a8bf683ae253d14881d4c6362e12afe1af5f847dacd99ce50d9905bfc448b6 not found: ID does not exist" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.669416 4955 scope.go:117] "RemoveContainer" containerID="c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a" Nov 28 06:37:10 crc kubenswrapper[4955]: E1128 06:37:10.670133 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a\": container with ID starting with c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a not found: ID does not exist" containerID="c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a" Nov 28 06:37:10 crc kubenswrapper[4955]: I1128 06:37:10.670171 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a"} err="failed to get container status \"c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a\": rpc error: code = NotFound desc = could not find container \"c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a\": container with ID starting with c54b1611425e09f8b461e5b52272b95bece2f842d272a37de755035f7b854a8a not found: ID does not exist" Nov 28 06:37:11 crc kubenswrapper[4955]: I1128 06:37:11.723165 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" path="/var/lib/kubelet/pods/fe99da91-d8e5-4d96-b466-5069fe22b26b/volumes" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.879132 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kw878"] Nov 28 06:37:22 crc kubenswrapper[4955]: E1128 06:37:22.879888 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="registry-server" Nov 28 06:37:22 crc 
kubenswrapper[4955]: I1128 06:37:22.879901 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="registry-server" Nov 28 06:37:22 crc kubenswrapper[4955]: E1128 06:37:22.879930 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="extract-content" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.879936 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="extract-content" Nov 28 06:37:22 crc kubenswrapper[4955]: E1128 06:37:22.879953 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="extract-utilities" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.879960 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="extract-utilities" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.880072 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe99da91-d8e5-4d96-b466-5069fe22b26b" containerName="registry-server" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.880828 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.884643 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.884892 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-mp7bf" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.884929 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.885741 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.900681 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kw878"] Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.921239 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-config\") pod \"dnsmasq-dns-675f4bcbfc-kw878\" (UID: \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.921340 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8dl\" (UniqueName: \"kubernetes.io/projected/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-kube-api-access-rd8dl\") pod \"dnsmasq-dns-675f4bcbfc-kw878\" (UID: \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.958157 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cggpm"] Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.959249 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.961489 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 28 06:37:22 crc kubenswrapper[4955]: I1128 06:37:22.976336 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cggpm"] Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.026351 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-config\") pod \"dnsmasq-dns-675f4bcbfc-kw878\" (UID: \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.026456 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-config\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.026520 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.026605 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8dl\" (UniqueName: \"kubernetes.io/projected/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-kube-api-access-rd8dl\") pod \"dnsmasq-dns-675f4bcbfc-kw878\" (UID: \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 
06:37:23.026649 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vfk\" (UniqueName: \"kubernetes.io/projected/a1507653-41f3-4aa6-a6db-55f6c67abbd9-kube-api-access-c6vfk\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.027142 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-config\") pod \"dnsmasq-dns-675f4bcbfc-kw878\" (UID: \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.052052 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8dl\" (UniqueName: \"kubernetes.io/projected/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-kube-api-access-rd8dl\") pod \"dnsmasq-dns-675f4bcbfc-kw878\" (UID: \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.128059 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-config\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.128111 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.128146 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c6vfk\" (UniqueName: \"kubernetes.io/projected/a1507653-41f3-4aa6-a6db-55f6c67abbd9-kube-api-access-c6vfk\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.128909 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-config\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.129024 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.144298 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vfk\" (UniqueName: \"kubernetes.io/projected/a1507653-41f3-4aa6-a6db-55f6c67abbd9-kube-api-access-c6vfk\") pod \"dnsmasq-dns-78dd6ddcc-cggpm\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.207333 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.280368 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.652077 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kw878"] Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.659947 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:37:23 crc kubenswrapper[4955]: I1128 06:37:23.780744 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cggpm"] Nov 28 06:37:23 crc kubenswrapper[4955]: W1128 06:37:23.780810 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1507653_41f3_4aa6_a6db_55f6c67abbd9.slice/crio-d2db88467e127985bb995d376774f8b946224d2f99ed7a83619ae1e33853fefd WatchSource:0}: Error finding container d2db88467e127985bb995d376774f8b946224d2f99ed7a83619ae1e33853fefd: Status 404 returned error can't find the container with id d2db88467e127985bb995d376774f8b946224d2f99ed7a83619ae1e33853fefd Nov 28 06:37:24 crc kubenswrapper[4955]: I1128 06:37:24.669437 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" event={"ID":"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5","Type":"ContainerStarted","Data":"bfa4d792d71c5fdce03adc19123f47491732df1078036b6db37f7cce893d90f2"} Nov 28 06:37:24 crc kubenswrapper[4955]: I1128 06:37:24.671439 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" event={"ID":"a1507653-41f3-4aa6-a6db-55f6c67abbd9","Type":"ContainerStarted","Data":"d2db88467e127985bb995d376774f8b946224d2f99ed7a83619ae1e33853fefd"} Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.011471 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kw878"] Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.036347 4955 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fffq5"] Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.037462 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.054171 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fffq5"] Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.175522 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.175578 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-config\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.175647 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7zt6\" (UniqueName: \"kubernetes.io/projected/933a971c-fc88-4fd7-8a37-fe34d58f9963-kube-api-access-q7zt6\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.277399 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 
06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.277439 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-config\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.277471 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7zt6\" (UniqueName: \"kubernetes.io/projected/933a971c-fc88-4fd7-8a37-fe34d58f9963-kube-api-access-q7zt6\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.278473 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-dns-svc\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.278999 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-config\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.313920 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7zt6\" (UniqueName: \"kubernetes.io/projected/933a971c-fc88-4fd7-8a37-fe34d58f9963-kube-api-access-q7zt6\") pod \"dnsmasq-dns-666b6646f7-fffq5\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.321336 4955 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cggpm"] Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.342061 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7tl74"] Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.345053 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.354750 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7tl74"] Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.359592 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.481165 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.481296 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-config\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.481559 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7llk\" (UniqueName: \"kubernetes.io/projected/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-kube-api-access-p7llk\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc 
kubenswrapper[4955]: I1128 06:37:26.582611 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7llk\" (UniqueName: \"kubernetes.io/projected/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-kube-api-access-p7llk\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.582691 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.582739 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-config\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.583738 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-config\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.585095 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.602995 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p7llk\" (UniqueName: \"kubernetes.io/projected/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-kube-api-access-p7llk\") pod \"dnsmasq-dns-57d769cc4f-7tl74\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.679080 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:26 crc kubenswrapper[4955]: I1128 06:37:26.853414 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fffq5"] Nov 28 06:37:26 crc kubenswrapper[4955]: W1128 06:37:26.861609 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod933a971c_fc88_4fd7_8a37_fe34d58f9963.slice/crio-e4d14c0c24569e563f86979e5a4bba3e96c045382e1b6ff7437feea799b9ea2a WatchSource:0}: Error finding container e4d14c0c24569e563f86979e5a4bba3e96c045382e1b6ff7437feea799b9ea2a: Status 404 returned error can't find the container with id e4d14c0c24569e563f86979e5a4bba3e96c045382e1b6ff7437feea799b9ea2a Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.103940 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7tl74"] Nov 28 06:37:27 crc kubenswrapper[4955]: W1128 06:37:27.111032 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf42cfc11_a12d_415e_8d9b_c4bcdde8e457.slice/crio-199170d4c0558f095717a642f2a1ca0c173c3a7c5a00d0d8439f35bee7448b73 WatchSource:0}: Error finding container 199170d4c0558f095717a642f2a1ca0c173c3a7c5a00d0d8439f35bee7448b73: Status 404 returned error can't find the container with id 199170d4c0558f095717a642f2a1ca0c173c3a7c5a00d0d8439f35bee7448b73 Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.240484 4955 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-server-0"] Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.242666 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.247118 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9x7gk" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.247282 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.247412 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.247717 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.247744 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.247891 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.247991 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.256360 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.300211 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.300287 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301236 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301379 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301413 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9bvd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-kube-api-access-c9bvd\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301448 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-config-data\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301481 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301576 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301689 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301736 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.301762 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.403554 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.403639 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.403687 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.403722 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.403757 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.403817 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.403956 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.404002 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.404065 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.404119 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9bvd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-kube-api-access-c9bvd\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.404176 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-config-data\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.404744 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.405533 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.405833 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.406030 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.407672 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-config-data\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.408891 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.409288 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.409417 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.411881 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.420741 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.420820 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9bvd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-kube-api-access-c9bvd\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.441425 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.526357 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.528269 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.533414 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.533478 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.533615 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.533728 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.533765 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.533893 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vhjwn" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.535101 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.545491 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:37:27 crc 
kubenswrapper[4955]: I1128 06:37:27.597815 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.606786 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.606892 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bnf\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-kube-api-access-s8bnf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.606929 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.606987 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.607014 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.607099 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.607268 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.607316 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.607400 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.607480 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.607532 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708638 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708689 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708731 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708771 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708790 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708814 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708853 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708876 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708915 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 
06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708940 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8bnf\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-kube-api-access-s8bnf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.708965 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.710083 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.711297 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.711640 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.712226 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.712475 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.713025 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.714210 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.720003 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.728460 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.736146 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8bnf\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-kube-api-access-s8bnf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.750828 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.756363 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" event={"ID":"933a971c-fc88-4fd7-8a37-fe34d58f9963","Type":"ContainerStarted","Data":"e4d14c0c24569e563f86979e5a4bba3e96c045382e1b6ff7437feea799b9ea2a"} Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.758987 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" event={"ID":"f42cfc11-a12d-415e-8d9b-c4bcdde8e457","Type":"ContainerStarted","Data":"199170d4c0558f095717a642f2a1ca0c173c3a7c5a00d0d8439f35bee7448b73"} Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.759321 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:27 crc kubenswrapper[4955]: I1128 06:37:27.895643 
4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:37:28 crc kubenswrapper[4955]: I1128 06:37:28.994714 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.005990 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.006118 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.012461 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.012706 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.012857 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rlrq9" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.015068 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.018317 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.056520 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.056621 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da36284b-b2a1-4008-a19c-3916e99c0bec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.056660 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da36284b-b2a1-4008-a19c-3916e99c0bec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.056697 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da36284b-b2a1-4008-a19c-3916e99c0bec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.056748 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.056773 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8xhh\" (UniqueName: \"kubernetes.io/projected/da36284b-b2a1-4008-a19c-3916e99c0bec-kube-api-access-w8xhh\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.056806 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-config-data-default\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.056874 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-kolla-config\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.158736 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da36284b-b2a1-4008-a19c-3916e99c0bec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.158807 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.158835 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8xhh\" (UniqueName: \"kubernetes.io/projected/da36284b-b2a1-4008-a19c-3916e99c0bec-kube-api-access-w8xhh\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.158868 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.158920 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-kolla-config\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.158973 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.159018 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.159027 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da36284b-b2a1-4008-a19c-3916e99c0bec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.159271 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da36284b-b2a1-4008-a19c-3916e99c0bec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: 
I1128 06:37:29.159600 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/da36284b-b2a1-4008-a19c-3916e99c0bec-config-data-generated\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.160393 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-config-data-default\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.161086 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-kolla-config\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.162376 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da36284b-b2a1-4008-a19c-3916e99c0bec-operator-scripts\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.165649 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/da36284b-b2a1-4008-a19c-3916e99c0bec-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.184112 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8xhh\" (UniqueName: 
\"kubernetes.io/projected/da36284b-b2a1-4008-a19c-3916e99c0bec-kube-api-access-w8xhh\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.192309 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da36284b-b2a1-4008-a19c-3916e99c0bec-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.213669 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"da36284b-b2a1-4008-a19c-3916e99c0bec\") " pod="openstack/openstack-galera-0" Nov 28 06:37:29 crc kubenswrapper[4955]: I1128 06:37:29.376918 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.395099 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.397325 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.441262 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.441338 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.442080 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-4dnww" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.442220 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.449303 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.480650 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.480715 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.480776 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.480799 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.480829 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.480861 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.480879 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6khcn\" (UniqueName: \"kubernetes.io/projected/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-kube-api-access-6khcn\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.480903 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.581878 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.581932 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.581975 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.582022 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.582045 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.582065 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.582095 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.582113 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6khcn\" (UniqueName: \"kubernetes.io/projected/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-kube-api-access-6khcn\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.582313 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.582973 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.583457 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.583749 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.585733 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.588340 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.588405 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.600628 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6khcn\" (UniqueName: \"kubernetes.io/projected/5ce3cc8f-9d19-49fa-83a9-d71cf669d26c-kube-api-access-6khcn\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.615654 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c\") " pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.717869 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.722680 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.726813 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-wxgdn" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.727054 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.727368 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.736289 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.768136 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.785770 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f1a214-823b-4a75-ada2-b5973ad7abd6-config-data\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.785850 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f1a214-823b-4a75-ada2-b5973ad7abd6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.785970 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8f1a214-823b-4a75-ada2-b5973ad7abd6-kolla-config\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.785989 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2t2l\" (UniqueName: \"kubernetes.io/projected/b8f1a214-823b-4a75-ada2-b5973ad7abd6-kube-api-access-z2t2l\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.786022 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f1a214-823b-4a75-ada2-b5973ad7abd6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 
06:37:30.887196 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8f1a214-823b-4a75-ada2-b5973ad7abd6-kolla-config\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.887568 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2t2l\" (UniqueName: \"kubernetes.io/projected/b8f1a214-823b-4a75-ada2-b5973ad7abd6-kube-api-access-z2t2l\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.887601 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f1a214-823b-4a75-ada2-b5973ad7abd6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.887632 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f1a214-823b-4a75-ada2-b5973ad7abd6-config-data\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.887678 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f1a214-823b-4a75-ada2-b5973ad7abd6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.888433 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8f1a214-823b-4a75-ada2-b5973ad7abd6-config-data\") pod 
\"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.888959 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8f1a214-823b-4a75-ada2-b5973ad7abd6-kolla-config\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.891970 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f1a214-823b-4a75-ada2-b5973ad7abd6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.907202 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2t2l\" (UniqueName: \"kubernetes.io/projected/b8f1a214-823b-4a75-ada2-b5973ad7abd6-kube-api-access-z2t2l\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:30 crc kubenswrapper[4955]: I1128 06:37:30.907373 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f1a214-823b-4a75-ada2-b5973ad7abd6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b8f1a214-823b-4a75-ada2-b5973ad7abd6\") " pod="openstack/memcached-0" Nov 28 06:37:31 crc kubenswrapper[4955]: I1128 06:37:31.053656 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 06:37:32 crc kubenswrapper[4955]: I1128 06:37:32.542913 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 06:37:32 crc kubenswrapper[4955]: I1128 06:37:32.545750 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 06:37:32 crc kubenswrapper[4955]: I1128 06:37:32.549569 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-jq4kf" Nov 28 06:37:32 crc kubenswrapper[4955]: I1128 06:37:32.582124 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 06:37:32 crc kubenswrapper[4955]: I1128 06:37:32.609999 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vhck\" (UniqueName: \"kubernetes.io/projected/49e9a1c0-0f8a-43ad-8180-6ecb191c5850-kube-api-access-2vhck\") pod \"kube-state-metrics-0\" (UID: \"49e9a1c0-0f8a-43ad-8180-6ecb191c5850\") " pod="openstack/kube-state-metrics-0" Nov 28 06:37:32 crc kubenswrapper[4955]: I1128 06:37:32.710835 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vhck\" (UniqueName: \"kubernetes.io/projected/49e9a1c0-0f8a-43ad-8180-6ecb191c5850-kube-api-access-2vhck\") pod \"kube-state-metrics-0\" (UID: \"49e9a1c0-0f8a-43ad-8180-6ecb191c5850\") " pod="openstack/kube-state-metrics-0" Nov 28 06:37:32 crc kubenswrapper[4955]: I1128 06:37:32.748595 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vhck\" (UniqueName: \"kubernetes.io/projected/49e9a1c0-0f8a-43ad-8180-6ecb191c5850-kube-api-access-2vhck\") pod \"kube-state-metrics-0\" (UID: \"49e9a1c0-0f8a-43ad-8180-6ecb191c5850\") " pod="openstack/kube-state-metrics-0" Nov 28 06:37:32 crc kubenswrapper[4955]: I1128 06:37:32.882593 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.974736 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p2bvh"] Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.975940 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.983330 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-run-ovn\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.983378 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-run\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.983414 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-log-ovn\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.983435 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3963971f-dccf-42a8-9889-b5e122ee6809-combined-ca-bundle\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 
06:37:36.983462 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c547f\" (UniqueName: \"kubernetes.io/projected/3963971f-dccf-42a8-9889-b5e122ee6809-kube-api-access-c547f\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.983492 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3963971f-dccf-42a8-9889-b5e122ee6809-scripts\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.983545 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3963971f-dccf-42a8-9889-b5e122ee6809-ovn-controller-tls-certs\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:36.987631 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p2bvh"] Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.020006 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.020470 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-twzsg" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.020662 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.052764 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-nxwbb"] Nov 28 
06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.071923 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.080468 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-nxwbb"] Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.084558 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c547f\" (UniqueName: \"kubernetes.io/projected/3963971f-dccf-42a8-9889-b5e122ee6809-kube-api-access-c547f\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.084607 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3963971f-dccf-42a8-9889-b5e122ee6809-scripts\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.084643 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3963971f-dccf-42a8-9889-b5e122ee6809-ovn-controller-tls-certs\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.084709 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-run-ovn\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.084734 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-run\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.084759 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-log-ovn\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.084772 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3963971f-dccf-42a8-9889-b5e122ee6809-combined-ca-bundle\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.094277 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-run-ovn\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.095155 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-run\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.095446 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3963971f-dccf-42a8-9889-b5e122ee6809-combined-ca-bundle\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " 
pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.095550 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3963971f-dccf-42a8-9889-b5e122ee6809-var-log-ovn\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.096327 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3963971f-dccf-42a8-9889-b5e122ee6809-scripts\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.109135 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c547f\" (UniqueName: \"kubernetes.io/projected/3963971f-dccf-42a8-9889-b5e122ee6809-kube-api-access-c547f\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.110752 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3963971f-dccf-42a8-9889-b5e122ee6809-ovn-controller-tls-certs\") pod \"ovn-controller-p2bvh\" (UID: \"3963971f-dccf-42a8-9889-b5e122ee6809\") " pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.185966 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-run\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.186018 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c1a8276-d93e-498f-94a2-e698b071f1ee-scripts\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.186037 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-lib\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.186061 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9f9h\" (UniqueName: \"kubernetes.io/projected/7c1a8276-d93e-498f-94a2-e698b071f1ee-kube-api-access-l9f9h\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.186150 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-log\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.186172 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-etc-ovs\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.287544 4955 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-log\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.287616 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-etc-ovs\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.287942 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-run\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.287964 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-etc-ovs\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.288022 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-run\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.288064 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-log\") pod \"ovn-controller-ovs-nxwbb\" (UID: 
\"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.288102 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c1a8276-d93e-498f-94a2-e698b071f1ee-scripts\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.288188 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-lib\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.288223 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9f9h\" (UniqueName: \"kubernetes.io/projected/7c1a8276-d93e-498f-94a2-e698b071f1ee-kube-api-access-l9f9h\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.288332 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/7c1a8276-d93e-498f-94a2-e698b071f1ee-var-lib\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.289970 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c1a8276-d93e-498f-94a2-e698b071f1ee-scripts\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.304978 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9f9h\" (UniqueName: \"kubernetes.io/projected/7c1a8276-d93e-498f-94a2-e698b071f1ee-kube-api-access-l9f9h\") pod \"ovn-controller-ovs-nxwbb\" (UID: \"7c1a8276-d93e-498f-94a2-e698b071f1ee\") " pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.346211 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:37 crc kubenswrapper[4955]: I1128 06:37:37.452927 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.518853 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.520369 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.528037 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.528398 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.528548 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6b74z" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.528560 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.528856 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.544213 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-0"] Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.630260 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.630323 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6f4x\" (UniqueName: \"kubernetes.io/projected/184e43b4-9c7c-4df1-b1a7-503ef8139459-kube-api-access-t6f4x\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.630361 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.630381 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/184e43b4-9c7c-4df1-b1a7-503ef8139459-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.630433 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc 
kubenswrapper[4955]: I1128 06:37:39.630457 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/184e43b4-9c7c-4df1-b1a7-503ef8139459-config\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.630476 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.630553 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/184e43b4-9c7c-4df1-b1a7-503ef8139459-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.732396 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.732460 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/184e43b4-9c7c-4df1-b1a7-503ef8139459-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.732541 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.732589 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/184e43b4-9c7c-4df1-b1a7-503ef8139459-config\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.732642 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.732836 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/184e43b4-9c7c-4df1-b1a7-503ef8139459-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.733574 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/184e43b4-9c7c-4df1-b1a7-503ef8139459-config\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.733596 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/184e43b4-9c7c-4df1-b1a7-503ef8139459-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " 
pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.733773 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/184e43b4-9c7c-4df1-b1a7-503ef8139459-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.733779 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.733901 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6f4x\" (UniqueName: \"kubernetes.io/projected/184e43b4-9c7c-4df1-b1a7-503ef8139459-kube-api-access-t6f4x\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.734235 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.750895 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.751767 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.754217 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/184e43b4-9c7c-4df1-b1a7-503ef8139459-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.759596 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.760827 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6f4x\" (UniqueName: \"kubernetes.io/projected/184e43b4-9c7c-4df1-b1a7-503ef8139459-kube-api-access-t6f4x\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.761437 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.769379 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.769456 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xls62" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.769743 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.769842 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.771409 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.793701 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"184e43b4-9c7c-4df1-b1a7-503ef8139459\") " pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.858175 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.936530 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.936612 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.936670 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1ba1430-28cb-4bba-936d-00e8988eab09-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.936707 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.936756 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smqmq\" (UniqueName: \"kubernetes.io/projected/f1ba1430-28cb-4bba-936d-00e8988eab09-kube-api-access-smqmq\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " 
pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.936793 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1ba1430-28cb-4bba-936d-00e8988eab09-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.936835 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1ba1430-28cb-4bba-936d-00e8988eab09-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:39 crc kubenswrapper[4955]: I1128 06:37:39.936923 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.039369 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.039452 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.039486 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1ba1430-28cb-4bba-936d-00e8988eab09-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.039634 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.039661 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqmq\" (UniqueName: \"kubernetes.io/projected/f1ba1430-28cb-4bba-936d-00e8988eab09-kube-api-access-smqmq\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.039689 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1ba1430-28cb-4bba-936d-00e8988eab09-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.039716 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1ba1430-28cb-4bba-936d-00e8988eab09-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.039766 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.040021 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.043665 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.045246 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1ba1430-28cb-4bba-936d-00e8988eab09-config\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.046757 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f1ba1430-28cb-4bba-936d-00e8988eab09-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.047685 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.048807 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f1ba1430-28cb-4bba-936d-00e8988eab09-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.050846 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1ba1430-28cb-4bba-936d-00e8988eab09-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.063571 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqmq\" (UniqueName: \"kubernetes.io/projected/f1ba1430-28cb-4bba-936d-00e8988eab09-kube-api-access-smqmq\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.073647 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f1ba1430-28cb-4bba-936d-00e8988eab09\") " pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: I1128 06:37:40.148467 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:40 crc kubenswrapper[4955]: E1128 06:37:40.408304 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 06:37:40 crc kubenswrapper[4955]: E1128 06:37:40.408596 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rd8dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kw878_openstack(3f606d6d-b0c3-48e4-90cc-c1500fcdfba5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:37:40 crc kubenswrapper[4955]: E1128 06:37:40.409795 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" podUID="3f606d6d-b0c3-48e4-90cc-c1500fcdfba5" Nov 28 06:37:40 crc kubenswrapper[4955]: E1128 06:37:40.437434 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 06:37:40 crc kubenswrapper[4955]: E1128 06:37:40.437583 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6vfk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-cggpm_openstack(a1507653-41f3-4aa6-a6db-55f6c67abbd9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:37:40 crc kubenswrapper[4955]: E1128 06:37:40.438801 4955 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" podUID="a1507653-41f3-4aa6-a6db-55f6c67abbd9" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.088912 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.122084 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.219387 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.333646 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.339449 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.464762 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 06:37:41 crc kubenswrapper[4955]: W1128 06:37:41.470818 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8f1a214_823b_4a75_ada2_b5973ad7abd6.slice/crio-86175be2c1333b81be3499ac40dd3132e0787b92d197c244a971b2b5d34cae94 WatchSource:0}: Error finding container 86175be2c1333b81be3499ac40dd3132e0787b92d197c244a971b2b5d34cae94: Status 404 returned error can't find the container with id 86175be2c1333b81be3499ac40dd3132e0787b92d197c244a971b2b5d34cae94 Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.471117 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-dns-svc\") pod \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.471179 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-config\") pod \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.471266 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd8dl\" (UniqueName: \"kubernetes.io/projected/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-kube-api-access-rd8dl\") pod \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\" (UID: \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\") " Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.471367 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-config\") pod \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\" (UID: \"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5\") " Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.471395 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6vfk\" (UniqueName: \"kubernetes.io/projected/a1507653-41f3-4aa6-a6db-55f6c67abbd9-kube-api-access-c6vfk\") pod \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\" (UID: \"a1507653-41f3-4aa6-a6db-55f6c67abbd9\") " Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.471577 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1507653-41f3-4aa6-a6db-55f6c67abbd9" (UID: "a1507653-41f3-4aa6-a6db-55f6c67abbd9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.471926 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.472433 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-config" (OuterVolumeSpecName: "config") pod "3f606d6d-b0c3-48e4-90cc-c1500fcdfba5" (UID: "3f606d6d-b0c3-48e4-90cc-c1500fcdfba5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.473121 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-config" (OuterVolumeSpecName: "config") pod "a1507653-41f3-4aa6-a6db-55f6c67abbd9" (UID: "a1507653-41f3-4aa6-a6db-55f6c67abbd9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.478579 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-kube-api-access-rd8dl" (OuterVolumeSpecName: "kube-api-access-rd8dl") pod "3f606d6d-b0c3-48e4-90cc-c1500fcdfba5" (UID: "3f606d6d-b0c3-48e4-90cc-c1500fcdfba5"). InnerVolumeSpecName "kube-api-access-rd8dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.491456 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1507653-41f3-4aa6-a6db-55f6c67abbd9-kube-api-access-c6vfk" (OuterVolumeSpecName: "kube-api-access-c6vfk") pod "a1507653-41f3-4aa6-a6db-55f6c67abbd9" (UID: "a1507653-41f3-4aa6-a6db-55f6c67abbd9"). InnerVolumeSpecName "kube-api-access-c6vfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.514113 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.526142 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.531830 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:37:41 crc kubenswrapper[4955]: W1128 06:37:41.533098 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf22c44d9_b740_4aaf_bf4f_19eea62e6b42.slice/crio-ed3b602cb3bd403aef0135df806b2b7fb8fff09a2cf9fb358cb6ccf8d3fc8fea WatchSource:0}: Error finding container ed3b602cb3bd403aef0135df806b2b7fb8fff09a2cf9fb358cb6ccf8d3fc8fea: Status 404 returned error can't find the container with id ed3b602cb3bd403aef0135df806b2b7fb8fff09a2cf9fb358cb6ccf8d3fc8fea Nov 
28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.573342 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1507653-41f3-4aa6-a6db-55f6c67abbd9-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.573376 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd8dl\" (UniqueName: \"kubernetes.io/projected/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-kube-api-access-rd8dl\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.573385 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.573396 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6vfk\" (UniqueName: \"kubernetes.io/projected/a1507653-41f3-4aa6-a6db-55f6c67abbd9-kube-api-access-c6vfk\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.607025 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 06:37:41 crc kubenswrapper[4955]: W1128 06:37:41.610113 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1ba1430_28cb_4bba_936d_00e8988eab09.slice/crio-5830bfc0221967838c6e4bf3cf3ce7fd00eaf502a18c38b5904aac851df86917 WatchSource:0}: Error finding container 5830bfc0221967838c6e4bf3cf3ce7fd00eaf502a18c38b5904aac851df86917: Status 404 returned error can't find the container with id 5830bfc0221967838c6e4bf3cf3ce7fd00eaf502a18c38b5904aac851df86917 Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.688697 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p2bvh"] Nov 28 06:37:41 crc kubenswrapper[4955]: W1128 06:37:41.697028 4955 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3963971f_dccf_42a8_9889_b5e122ee6809.slice/crio-6d0be537a6ab3a29a4c682e426b31aaebdfed81828f538db2ed10b9c87525096 WatchSource:0}: Error finding container 6d0be537a6ab3a29a4c682e426b31aaebdfed81828f538db2ed10b9c87525096: Status 404 returned error can't find the container with id 6d0be537a6ab3a29a4c682e426b31aaebdfed81828f538db2ed10b9c87525096 Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.785378 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-nxwbb"] Nov 28 06:37:41 crc kubenswrapper[4955]: W1128 06:37:41.813016 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c1a8276_d93e_498f_94a2_e698b071f1ee.slice/crio-f23368760ca061b0e7197cf23c42655738e22ad3464d030cdca11ca7f9103564 WatchSource:0}: Error finding container f23368760ca061b0e7197cf23c42655738e22ad3464d030cdca11ca7f9103564: Status 404 returned error can't find the container with id f23368760ca061b0e7197cf23c42655738e22ad3464d030cdca11ca7f9103564 Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.862379 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1ba1430-28cb-4bba-936d-00e8988eab09","Type":"ContainerStarted","Data":"5830bfc0221967838c6e4bf3cf3ce7fd00eaf502a18c38b5904aac851df86917"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.864581 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d","Type":"ContainerStarted","Data":"c109bada6b39e1d00accd6d831cce48494a6d223c2d0c733ce8311fb1b7b6429"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.866656 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nxwbb" 
event={"ID":"7c1a8276-d93e-498f-94a2-e698b071f1ee","Type":"ContainerStarted","Data":"f23368760ca061b0e7197cf23c42655738e22ad3464d030cdca11ca7f9103564"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.870758 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c","Type":"ContainerStarted","Data":"cbb5ca08a643ae18714a73a3d595c24e9fc68dda124004c1a2a3de8a7fb5eaa4"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.872085 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"49e9a1c0-0f8a-43ad-8180-6ecb191c5850","Type":"ContainerStarted","Data":"12109ad1b919de4391a41a30680ae8ca031b80ca80c54d2638b33a94efd9ed31"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.874193 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" event={"ID":"3f606d6d-b0c3-48e4-90cc-c1500fcdfba5","Type":"ContainerDied","Data":"bfa4d792d71c5fdce03adc19123f47491732df1078036b6db37f7cce893d90f2"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.874285 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kw878" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.877403 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"184e43b4-9c7c-4df1-b1a7-503ef8139459","Type":"ContainerStarted","Data":"cdda9d7c73cd2f23266f984aa4e51b277d6767fb71a4d4aae3f3a38121d880b2"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.882191 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p2bvh" event={"ID":"3963971f-dccf-42a8-9889-b5e122ee6809","Type":"ContainerStarted","Data":"6d0be537a6ab3a29a4c682e426b31aaebdfed81828f538db2ed10b9c87525096"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.884938 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b8f1a214-823b-4a75-ada2-b5973ad7abd6","Type":"ContainerStarted","Data":"86175be2c1333b81be3499ac40dd3132e0787b92d197c244a971b2b5d34cae94"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.886530 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f22c44d9-b740-4aaf-bf4f-19eea62e6b42","Type":"ContainerStarted","Data":"ed3b602cb3bd403aef0135df806b2b7fb8fff09a2cf9fb358cb6ccf8d3fc8fea"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.897661 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"da36284b-b2a1-4008-a19c-3916e99c0bec","Type":"ContainerStarted","Data":"323a36f8932b76c6669ce66ace0516ee58d8b5b88ecf190faf1f769968723cfa"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.900496 4955 generic.go:334] "Generic (PLEG): container finished" podID="933a971c-fc88-4fd7-8a37-fe34d58f9963" containerID="9574b891d56f4978ddea5cab1d96a9e9532ea7fbe964aa37d53df280225bb480" exitCode=0 Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.900614 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-666b6646f7-fffq5" event={"ID":"933a971c-fc88-4fd7-8a37-fe34d58f9963","Type":"ContainerDied","Data":"9574b891d56f4978ddea5cab1d96a9e9532ea7fbe964aa37d53df280225bb480"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.904359 4955 generic.go:334] "Generic (PLEG): container finished" podID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" containerID="006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316" exitCode=0 Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.904488 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" event={"ID":"f42cfc11-a12d-415e-8d9b-c4bcdde8e457","Type":"ContainerDied","Data":"006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.912261 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.912956 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-cggpm" event={"ID":"a1507653-41f3-4aa6-a6db-55f6c67abbd9","Type":"ContainerDied","Data":"d2db88467e127985bb995d376774f8b946224d2f99ed7a83619ae1e33853fefd"} Nov 28 06:37:41 crc kubenswrapper[4955]: I1128 06:37:41.996546 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kw878"] Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.009969 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kw878"] Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.067632 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cggpm"] Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.076971 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cggpm"] Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.923802 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" event={"ID":"933a971c-fc88-4fd7-8a37-fe34d58f9963","Type":"ContainerStarted","Data":"4f6d187bbe95ed4f093cc62c5465fe84978aa432fbbb6143c795485013365e79"} Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.924253 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.925983 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" event={"ID":"f42cfc11-a12d-415e-8d9b-c4bcdde8e457","Type":"ContainerStarted","Data":"da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f"} Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.926221 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.950001 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" podStartSLOduration=3.174134519 podStartE2EDuration="16.949978153s" podCreationTimestamp="2025-11-28 06:37:26 +0000 UTC" firstStartedPulling="2025-11-28 06:37:26.864762472 +0000 UTC m=+969.454018042" lastFinishedPulling="2025-11-28 06:37:40.640606106 +0000 UTC m=+983.229861676" observedRunningTime="2025-11-28 06:37:42.942003746 +0000 UTC m=+985.531259336" watchObservedRunningTime="2025-11-28 06:37:42.949978153 +0000 UTC m=+985.539233723" Nov 28 06:37:42 crc kubenswrapper[4955]: I1128 06:37:42.968014 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" podStartSLOduration=3.473070449 podStartE2EDuration="16.967995746s" podCreationTimestamp="2025-11-28 06:37:26 +0000 UTC" firstStartedPulling="2025-11-28 06:37:27.113456652 +0000 UTC m=+969.702712232" lastFinishedPulling="2025-11-28 06:37:40.608381959 +0000 UTC m=+983.197637529" 
observedRunningTime="2025-11-28 06:37:42.961024037 +0000 UTC m=+985.550279617" watchObservedRunningTime="2025-11-28 06:37:42.967995746 +0000 UTC m=+985.557251316" Nov 28 06:37:43 crc kubenswrapper[4955]: I1128 06:37:43.713462 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f606d6d-b0c3-48e4-90cc-c1500fcdfba5" path="/var/lib/kubelet/pods/3f606d6d-b0c3-48e4-90cc-c1500fcdfba5/volumes" Nov 28 06:37:43 crc kubenswrapper[4955]: I1128 06:37:43.714333 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1507653-41f3-4aa6-a6db-55f6c67abbd9" path="/var/lib/kubelet/pods/a1507653-41f3-4aa6-a6db-55f6c67abbd9/volumes" Nov 28 06:37:49 crc kubenswrapper[4955]: I1128 06:37:49.992758 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"da36284b-b2a1-4008-a19c-3916e99c0bec","Type":"ContainerStarted","Data":"65e1eed878442ea53b2a8660a19e4f57fbe2490a046cf02a2d92ec0d032b85fc"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.010736 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"49e9a1c0-0f8a-43ad-8180-6ecb191c5850","Type":"ContainerStarted","Data":"507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.011170 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.013180 4955 generic.go:334] "Generic (PLEG): container finished" podID="7c1a8276-d93e-498f-94a2-e698b071f1ee" containerID="e0d4d7317a18a9d3a4e305c67cc3b61ed16182db9289dfd1817af87385a1e74e" exitCode=0 Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.013230 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nxwbb" 
event={"ID":"7c1a8276-d93e-498f-94a2-e698b071f1ee","Type":"ContainerDied","Data":"e0d4d7317a18a9d3a4e305c67cc3b61ed16182db9289dfd1817af87385a1e74e"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.015922 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f22c44d9-b740-4aaf-bf4f-19eea62e6b42","Type":"ContainerStarted","Data":"d87219f0bc006bb8d8315faf869bcf63563d1364fc23cc98cb44a72498571ff2"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.017207 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c","Type":"ContainerStarted","Data":"d9689415d269e187d73a3872f836f07d06a5ea96fd606154d42172f886bf6f95"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.019308 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1ba1430-28cb-4bba-936d-00e8988eab09","Type":"ContainerStarted","Data":"94b86730839f2c2f95b471c27a073b6c32cda420a50f8ea7b9e67fa50c6984fb"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.030672 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p2bvh" event={"ID":"3963971f-dccf-42a8-9889-b5e122ee6809","Type":"ContainerStarted","Data":"bd277650a00dc5babe2a76b7e1c5572789b5aba28d4b9114853dd22e108fcfd9"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.030804 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-p2bvh" Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.033084 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b8f1a214-823b-4a75-ada2-b5973ad7abd6","Type":"ContainerStarted","Data":"0e41af4e3525a52efc8d0aaea729878b19f1fdbb9f10e84680a11d89898283aa"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.033833 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/memcached-0" Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.035848 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d","Type":"ContainerStarted","Data":"237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.038712 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"184e43b4-9c7c-4df1-b1a7-503ef8139459","Type":"ContainerStarted","Data":"7b0ec54b3c753047c60f0d333127c382c6f468f23d4887217d1a6adeab0c2c46"} Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.044244 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.839134708 podStartE2EDuration="19.044233633s" podCreationTimestamp="2025-11-28 06:37:32 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.514135046 +0000 UTC m=+984.103390606" lastFinishedPulling="2025-11-28 06:37:49.719233961 +0000 UTC m=+992.308489531" observedRunningTime="2025-11-28 06:37:51.025632593 +0000 UTC m=+993.614888163" watchObservedRunningTime="2025-11-28 06:37:51.044233633 +0000 UTC m=+993.633489203" Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.116784 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=13.597844328 podStartE2EDuration="21.116767098s" podCreationTimestamp="2025-11-28 06:37:30 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.477832112 +0000 UTC m=+984.067087682" lastFinishedPulling="2025-11-28 06:37:48.996754882 +0000 UTC m=+991.586010452" observedRunningTime="2025-11-28 06:37:51.104655013 +0000 UTC m=+993.693910613" watchObservedRunningTime="2025-11-28 06:37:51.116767098 +0000 UTC m=+993.706022668" Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.128837 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovn-controller-p2bvh" podStartSLOduration=7.6893567130000005 podStartE2EDuration="15.128819361s" podCreationTimestamp="2025-11-28 06:37:36 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.700299286 +0000 UTC m=+984.289554866" lastFinishedPulling="2025-11-28 06:37:49.139761944 +0000 UTC m=+991.729017514" observedRunningTime="2025-11-28 06:37:51.124836487 +0000 UTC m=+993.714092057" watchObservedRunningTime="2025-11-28 06:37:51.128819361 +0000 UTC m=+993.718074931" Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.361773 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.680646 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:37:51 crc kubenswrapper[4955]: I1128 06:37:51.726019 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fffq5"] Nov 28 06:37:52 crc kubenswrapper[4955]: I1128 06:37:52.050849 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nxwbb" event={"ID":"7c1a8276-d93e-498f-94a2-e698b071f1ee","Type":"ContainerStarted","Data":"d37637f09b39812bcf941098895802b10bc6cf2bb9064372f4590d01df63df0b"} Nov 28 06:37:52 crc kubenswrapper[4955]: I1128 06:37:52.050888 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-nxwbb" event={"ID":"7c1a8276-d93e-498f-94a2-e698b071f1ee","Type":"ContainerStarted","Data":"eda0ebe52d613b511895294fd7cc840f938884b4ed5a4b39c6f501047663b043"} Nov 28 06:37:52 crc kubenswrapper[4955]: I1128 06:37:52.051012 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" podUID="933a971c-fc88-4fd7-8a37-fe34d58f9963" containerName="dnsmasq-dns" containerID="cri-o://4f6d187bbe95ed4f093cc62c5465fe84978aa432fbbb6143c795485013365e79" gracePeriod=10 Nov 
28 06:37:52 crc kubenswrapper[4955]: I1128 06:37:52.053290 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:52 crc kubenswrapper[4955]: I1128 06:37:52.053319 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:37:52 crc kubenswrapper[4955]: I1128 06:37:52.078219 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-nxwbb" podStartSLOduration=8.039795989 podStartE2EDuration="15.078200119s" podCreationTimestamp="2025-11-28 06:37:37 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.815064393 +0000 UTC m=+984.404319973" lastFinishedPulling="2025-11-28 06:37:48.853468513 +0000 UTC m=+991.442724103" observedRunningTime="2025-11-28 06:37:52.075026689 +0000 UTC m=+994.664282279" watchObservedRunningTime="2025-11-28 06:37:52.078200119 +0000 UTC m=+994.667455689" Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.058042 4955 generic.go:334] "Generic (PLEG): container finished" podID="933a971c-fc88-4fd7-8a37-fe34d58f9963" containerID="4f6d187bbe95ed4f093cc62c5465fe84978aa432fbbb6143c795485013365e79" exitCode=0 Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.058130 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" event={"ID":"933a971c-fc88-4fd7-8a37-fe34d58f9963","Type":"ContainerDied","Data":"4f6d187bbe95ed4f093cc62c5465fe84978aa432fbbb6143c795485013365e79"} Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.385400 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.432131 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7zt6\" (UniqueName: \"kubernetes.io/projected/933a971c-fc88-4fd7-8a37-fe34d58f9963-kube-api-access-q7zt6\") pod \"933a971c-fc88-4fd7-8a37-fe34d58f9963\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.432318 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-dns-svc\") pod \"933a971c-fc88-4fd7-8a37-fe34d58f9963\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.432422 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-config\") pod \"933a971c-fc88-4fd7-8a37-fe34d58f9963\" (UID: \"933a971c-fc88-4fd7-8a37-fe34d58f9963\") " Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.436763 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/933a971c-fc88-4fd7-8a37-fe34d58f9963-kube-api-access-q7zt6" (OuterVolumeSpecName: "kube-api-access-q7zt6") pod "933a971c-fc88-4fd7-8a37-fe34d58f9963" (UID: "933a971c-fc88-4fd7-8a37-fe34d58f9963"). InnerVolumeSpecName "kube-api-access-q7zt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.475444 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-config" (OuterVolumeSpecName: "config") pod "933a971c-fc88-4fd7-8a37-fe34d58f9963" (UID: "933a971c-fc88-4fd7-8a37-fe34d58f9963"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.476682 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "933a971c-fc88-4fd7-8a37-fe34d58f9963" (UID: "933a971c-fc88-4fd7-8a37-fe34d58f9963"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.534815 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7zt6\" (UniqueName: \"kubernetes.io/projected/933a971c-fc88-4fd7-8a37-fe34d58f9963-kube-api-access-q7zt6\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.534845 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:53 crc kubenswrapper[4955]: I1128 06:37:53.534856 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/933a971c-fc88-4fd7-8a37-fe34d58f9963-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.070933 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f1ba1430-28cb-4bba-936d-00e8988eab09","Type":"ContainerStarted","Data":"c0eea3cd38e78761e746935df79a7cc4b1e1eda7f0a9743138989152c9877114"} Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.075084 4955 generic.go:334] "Generic (PLEG): container finished" podID="5ce3cc8f-9d19-49fa-83a9-d71cf669d26c" containerID="d9689415d269e187d73a3872f836f07d06a5ea96fd606154d42172f886bf6f95" exitCode=0 Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.075226 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c","Type":"ContainerDied","Data":"d9689415d269e187d73a3872f836f07d06a5ea96fd606154d42172f886bf6f95"} Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.077742 4955 generic.go:334] "Generic (PLEG): container finished" podID="da36284b-b2a1-4008-a19c-3916e99c0bec" containerID="65e1eed878442ea53b2a8660a19e4f57fbe2490a046cf02a2d92ec0d032b85fc" exitCode=0 Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.077842 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"da36284b-b2a1-4008-a19c-3916e99c0bec","Type":"ContainerDied","Data":"65e1eed878442ea53b2a8660a19e4f57fbe2490a046cf02a2d92ec0d032b85fc"} Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.093987 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.093862 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-fffq5" event={"ID":"933a971c-fc88-4fd7-8a37-fe34d58f9963","Type":"ContainerDied","Data":"e4d14c0c24569e563f86979e5a4bba3e96c045382e1b6ff7437feea799b9ea2a"} Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.095103 4955 scope.go:117] "RemoveContainer" containerID="4f6d187bbe95ed4f093cc62c5465fe84978aa432fbbb6143c795485013365e79" Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.098750 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"184e43b4-9c7c-4df1-b1a7-503ef8139459","Type":"ContainerStarted","Data":"c90bcfd128f38af2281f4a03833bb2d866ab62d94e014c654f56e803be7e26a4"} Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.112242 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.452119868 podStartE2EDuration="16.112217136s" podCreationTimestamp="2025-11-28 06:37:38 +0000 UTC" 
firstStartedPulling="2025-11-28 06:37:41.614234455 +0000 UTC m=+984.203490025" lastFinishedPulling="2025-11-28 06:37:53.274331723 +0000 UTC m=+995.863587293" observedRunningTime="2025-11-28 06:37:54.103265972 +0000 UTC m=+996.692521602" watchObservedRunningTime="2025-11-28 06:37:54.112217136 +0000 UTC m=+996.701472746" Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.205074 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.141732722 podStartE2EDuration="16.205047029s" podCreationTimestamp="2025-11-28 06:37:38 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.228743331 +0000 UTC m=+983.817998901" lastFinishedPulling="2025-11-28 06:37:53.292057638 +0000 UTC m=+995.881313208" observedRunningTime="2025-11-28 06:37:54.196192707 +0000 UTC m=+996.785448307" watchObservedRunningTime="2025-11-28 06:37:54.205047029 +0000 UTC m=+996.794302609" Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.247084 4955 scope.go:117] "RemoveContainer" containerID="9574b891d56f4978ddea5cab1d96a9e9532ea7fbe964aa37d53df280225bb480" Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.271534 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fffq5"] Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.277772 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-fffq5"] Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.859517 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.859985 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:54 crc kubenswrapper[4955]: I1128 06:37:54.926014 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 
06:37:55.107132 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5ce3cc8f-9d19-49fa-83a9-d71cf669d26c","Type":"ContainerStarted","Data":"e621c20e0006e2bfb7aa3b2ff6a7e506dc20866d28435de47683e067e8367a7e"} Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.109365 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"da36284b-b2a1-4008-a19c-3916e99c0bec","Type":"ContainerStarted","Data":"be3d37f6b6f3328662d37520c9e7168de89f65fcfb2abacdd394d18688ac051c"} Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.139363 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=18.105649102 podStartE2EDuration="26.139334188s" podCreationTimestamp="2025-11-28 06:37:29 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.543143461 +0000 UTC m=+984.132399031" lastFinishedPulling="2025-11-28 06:37:49.576828547 +0000 UTC m=+992.166084117" observedRunningTime="2025-11-28 06:37:55.138517745 +0000 UTC m=+997.727773335" watchObservedRunningTime="2025-11-28 06:37:55.139334188 +0000 UTC m=+997.728589798" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.148946 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.149057 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.166859 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=20.446781924 podStartE2EDuration="28.166832481s" podCreationTimestamp="2025-11-28 06:37:27 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.133413426 +0000 UTC m=+983.722668996" lastFinishedPulling="2025-11-28 06:37:48.853463983 +0000 UTC m=+991.442719553" 
observedRunningTime="2025-11-28 06:37:55.166056969 +0000 UTC m=+997.755312619" watchObservedRunningTime="2025-11-28 06:37:55.166832481 +0000 UTC m=+997.756088071" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.180895 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.204649 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.371041 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l7qv2"] Nov 28 06:37:55 crc kubenswrapper[4955]: E1128 06:37:55.371413 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933a971c-fc88-4fd7-8a37-fe34d58f9963" containerName="init" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.371437 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="933a971c-fc88-4fd7-8a37-fe34d58f9963" containerName="init" Nov 28 06:37:55 crc kubenswrapper[4955]: E1128 06:37:55.371461 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="933a971c-fc88-4fd7-8a37-fe34d58f9963" containerName="dnsmasq-dns" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.371469 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="933a971c-fc88-4fd7-8a37-fe34d58f9963" containerName="dnsmasq-dns" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.371644 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="933a971c-fc88-4fd7-8a37-fe34d58f9963" containerName="dnsmasq-dns" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.372445 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.374604 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.381313 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l7qv2"] Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.451068 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vqt82"] Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.452033 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.453950 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.469732 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.469792 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n4jd\" (UniqueName: \"kubernetes.io/projected/743a2a84-ec78-450c-a82a-e3ab23f56967-kube-api-access-8n4jd\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.470144 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vqt82"] Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.470341 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-config\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.470688 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572565 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3543aa49-473d-4e57-a9eb-edbca5c7f58d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572645 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n4jd\" (UniqueName: \"kubernetes.io/projected/743a2a84-ec78-450c-a82a-e3ab23f56967-kube-api-access-8n4jd\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572684 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3543aa49-473d-4e57-a9eb-edbca5c7f58d-config\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572740 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3543aa49-473d-4e57-a9eb-edbca5c7f58d-ovs-rundir\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572777 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9ngj\" (UniqueName: \"kubernetes.io/projected/3543aa49-473d-4e57-a9eb-edbca5c7f58d-kube-api-access-w9ngj\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572795 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-config\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572818 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572843 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3543aa49-473d-4e57-a9eb-edbca5c7f58d-combined-ca-bundle\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.572863 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3543aa49-473d-4e57-a9eb-edbca5c7f58d-ovn-rundir\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.573711 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.574637 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-config\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.575141 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.592354 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7fd796d7df-l7qv2"] Nov 28 06:37:55 crc kubenswrapper[4955]: E1128 06:37:55.592887 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-8n4jd], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" podUID="743a2a84-ec78-450c-a82a-e3ab23f56967" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.602762 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n4jd\" (UniqueName: \"kubernetes.io/projected/743a2a84-ec78-450c-a82a-e3ab23f56967-kube-api-access-8n4jd\") pod \"dnsmasq-dns-7fd796d7df-l7qv2\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.612513 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zlllc"] Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.613762 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.618321 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.625895 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zlllc"] Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.674393 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3543aa49-473d-4e57-a9eb-edbca5c7f58d-combined-ca-bundle\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.674832 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3543aa49-473d-4e57-a9eb-edbca5c7f58d-ovn-rundir\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.674872 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3543aa49-473d-4e57-a9eb-edbca5c7f58d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.674912 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3543aa49-473d-4e57-a9eb-edbca5c7f58d-config\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 
06:37:55.674956 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3543aa49-473d-4e57-a9eb-edbca5c7f58d-ovs-rundir\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.674996 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9ngj\" (UniqueName: \"kubernetes.io/projected/3543aa49-473d-4e57-a9eb-edbca5c7f58d-kube-api-access-w9ngj\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.676237 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/3543aa49-473d-4e57-a9eb-edbca5c7f58d-ovn-rundir\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.676315 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/3543aa49-473d-4e57-a9eb-edbca5c7f58d-ovs-rundir\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.676862 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3543aa49-473d-4e57-a9eb-edbca5c7f58d-config\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.680476 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3543aa49-473d-4e57-a9eb-edbca5c7f58d-combined-ca-bundle\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.680649 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3543aa49-473d-4e57-a9eb-edbca5c7f58d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.713257 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9ngj\" (UniqueName: \"kubernetes.io/projected/3543aa49-473d-4e57-a9eb-edbca5c7f58d-kube-api-access-w9ngj\") pod \"ovn-controller-metrics-vqt82\" (UID: \"3543aa49-473d-4e57-a9eb-edbca5c7f58d\") " pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.724190 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="933a971c-fc88-4fd7-8a37-fe34d58f9963" path="/var/lib/kubelet/pods/933a971c-fc88-4fd7-8a37-fe34d58f9963/volumes" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.776282 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmn9l\" (UniqueName: \"kubernetes.io/projected/c8800bd8-704b-4cf0-b984-edaebf47b963-kube-api-access-vmn9l\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.776356 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-config\") pod 
\"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.776459 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.776483 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.776548 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.776737 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vqt82" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.892743 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-config\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.892946 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.892987 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.893079 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.893190 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmn9l\" (UniqueName: \"kubernetes.io/projected/c8800bd8-704b-4cf0-b984-edaebf47b963-kube-api-access-vmn9l\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" 
Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.894541 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.894797 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-config\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.895170 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.898743 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.909807 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmn9l\" (UniqueName: \"kubernetes.io/projected/c8800bd8-704b-4cf0-b984-edaebf47b963-kube-api-access-vmn9l\") pod \"dnsmasq-dns-86db49b7ff-zlllc\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:55 crc kubenswrapper[4955]: I1128 06:37:55.956093 4955 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.056551 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.130158 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.148788 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.186037 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.248197 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vqt82"] Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.300016 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-ovsdbserver-nb\") pod \"743a2a84-ec78-450c-a82a-e3ab23f56967\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.300090 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n4jd\" (UniqueName: \"kubernetes.io/projected/743a2a84-ec78-450c-a82a-e3ab23f56967-kube-api-access-8n4jd\") pod \"743a2a84-ec78-450c-a82a-e3ab23f56967\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.300236 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-dns-svc\") pod \"743a2a84-ec78-450c-a82a-e3ab23f56967\" (UID: 
\"743a2a84-ec78-450c-a82a-e3ab23f56967\") " Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.300308 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-config\") pod \"743a2a84-ec78-450c-a82a-e3ab23f56967\" (UID: \"743a2a84-ec78-450c-a82a-e3ab23f56967\") " Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.300469 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "743a2a84-ec78-450c-a82a-e3ab23f56967" (UID: "743a2a84-ec78-450c-a82a-e3ab23f56967"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.300836 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.300864 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-config" (OuterVolumeSpecName: "config") pod "743a2a84-ec78-450c-a82a-e3ab23f56967" (UID: "743a2a84-ec78-450c-a82a-e3ab23f56967"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.301281 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "743a2a84-ec78-450c-a82a-e3ab23f56967" (UID: "743a2a84-ec78-450c-a82a-e3ab23f56967"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.303036 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743a2a84-ec78-450c-a82a-e3ab23f56967-kube-api-access-8n4jd" (OuterVolumeSpecName: "kube-api-access-8n4jd") pod "743a2a84-ec78-450c-a82a-e3ab23f56967" (UID: "743a2a84-ec78-450c-a82a-e3ab23f56967"). InnerVolumeSpecName "kube-api-access-8n4jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.410341 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.410375 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743a2a84-ec78-450c-a82a-e3ab23f56967-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.410385 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n4jd\" (UniqueName: \"kubernetes.io/projected/743a2a84-ec78-450c-a82a-e3ab23f56967-kube-api-access-8n4jd\") on node \"crc\" DevicePath \"\"" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.410407 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.411705 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.414842 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.415002 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.415099 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.415175 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-2ppgg" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.423694 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.464136 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zlllc"] Nov 28 06:37:56 crc kubenswrapper[4955]: W1128 06:37:56.476846 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8800bd8_704b_4cf0_b984_edaebf47b963.slice/crio-d53e5bce47b04587d28f06bf48768c51d9a367c32e154bcb14e93925356e6e5e WatchSource:0}: Error finding container d53e5bce47b04587d28f06bf48768c51d9a367c32e154bcb14e93925356e6e5e: Status 404 returned error can't find the container with id d53e5bce47b04587d28f06bf48768c51d9a367c32e154bcb14e93925356e6e5e Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.511944 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-scripts\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 
06:37:56.512031 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.512052 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx99d\" (UniqueName: \"kubernetes.io/projected/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-kube-api-access-zx99d\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.512092 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.512106 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.512208 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-config\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.512230 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.613461 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-config\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.613528 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.613601 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-scripts\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.613629 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.613647 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx99d\" (UniqueName: \"kubernetes.io/projected/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-kube-api-access-zx99d\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " 
pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.613699 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.613714 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.614440 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.614884 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-config\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.616249 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-scripts\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.618310 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.618578 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.619667 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.629151 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx99d\" (UniqueName: \"kubernetes.io/projected/88f30640-ea6e-4479-b4ab-4e21f96f7ddb-kube-api-access-zx99d\") pod \"ovn-northd-0\" (UID: \"88f30640-ea6e-4479-b4ab-4e21f96f7ddb\") " pod="openstack/ovn-northd-0" Nov 28 06:37:56 crc kubenswrapper[4955]: I1128 06:37:56.789207 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.139557 4955 generic.go:334] "Generic (PLEG): container finished" podID="c8800bd8-704b-4cf0-b984-edaebf47b963" containerID="f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0" exitCode=0 Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.139639 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" event={"ID":"c8800bd8-704b-4cf0-b984-edaebf47b963","Type":"ContainerDied","Data":"f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0"} Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.140082 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" event={"ID":"c8800bd8-704b-4cf0-b984-edaebf47b963","Type":"ContainerStarted","Data":"d53e5bce47b04587d28f06bf48768c51d9a367c32e154bcb14e93925356e6e5e"} Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.142173 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vqt82" event={"ID":"3543aa49-473d-4e57-a9eb-edbca5c7f58d","Type":"ContainerStarted","Data":"83a8eb3c3ac56c1d516f9f10958f3603aec72d7ddceb52a650fe6bed78d47cd7"} Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.142223 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-l7qv2" Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.142203 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vqt82" event={"ID":"3543aa49-473d-4e57-a9eb-edbca5c7f58d","Type":"ContainerStarted","Data":"bcaa8009244b76d0a035256e42eda1f5d5bbcf16e08ece8630124085cd3dfebd"} Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.202407 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vqt82" podStartSLOduration=2.202384152 podStartE2EDuration="2.202384152s" podCreationTimestamp="2025-11-28 06:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:37:57.201391654 +0000 UTC m=+999.790647224" watchObservedRunningTime="2025-11-28 06:37:57.202384152 +0000 UTC m=+999.791639722" Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.237575 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.325091 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l7qv2"] Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.330930 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-l7qv2"] Nov 28 06:37:57 crc kubenswrapper[4955]: I1128 06:37:57.718646 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743a2a84-ec78-450c-a82a-e3ab23f56967" path="/var/lib/kubelet/pods/743a2a84-ec78-450c-a82a-e3ab23f56967/volumes" Nov 28 06:37:58 crc kubenswrapper[4955]: I1128 06:37:58.151604 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"88f30640-ea6e-4479-b4ab-4e21f96f7ddb","Type":"ContainerStarted","Data":"45c648326ab97979bc31302e605c139ac4b1f14c64bc18d990dd736b75444815"} Nov 28 06:37:59 crc 
kubenswrapper[4955]: I1128 06:37:59.377856 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 28 06:37:59 crc kubenswrapper[4955]: I1128 06:37:59.378260 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 28 06:38:00 crc kubenswrapper[4955]: I1128 06:38:00.769492 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 28 06:38:00 crc kubenswrapper[4955]: I1128 06:38:00.769582 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 28 06:38:02 crc kubenswrapper[4955]: I1128 06:38:02.892054 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 06:38:02 crc kubenswrapper[4955]: I1128 06:38:02.941388 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zlllc"] Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.009733 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-xjtfj"] Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.010925 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.023672 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xjtfj"] Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.128146 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.128196 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-config\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.128224 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-dns-svc\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.128260 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kf69\" (UniqueName: \"kubernetes.io/projected/8c14d08e-06cc-409c-84d7-dad9fcfc4835-kube-api-access-2kf69\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.128301 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.229457 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kf69\" (UniqueName: \"kubernetes.io/projected/8c14d08e-06cc-409c-84d7-dad9fcfc4835-kube-api-access-2kf69\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.229535 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.229598 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.229623 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-config\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.229649 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-dns-svc\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.230431 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-dns-svc\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.230496 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.230837 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-config\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.231243 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-xjtfj\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.253844 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kf69\" (UniqueName: \"kubernetes.io/projected/8c14d08e-06cc-409c-84d7-dad9fcfc4835-kube-api-access-2kf69\") pod \"dnsmasq-dns-698758b865-xjtfj\" 
(UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.347168 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:03 crc kubenswrapper[4955]: I1128 06:38:03.800008 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xjtfj"] Nov 28 06:38:03 crc kubenswrapper[4955]: W1128 06:38:03.802087 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c14d08e_06cc_409c_84d7_dad9fcfc4835.slice/crio-324f3bf11141822e8e8f986502a9bb10d48cd30b0038c696b3fb18d996fb3b22 WatchSource:0}: Error finding container 324f3bf11141822e8e8f986502a9bb10d48cd30b0038c696b3fb18d996fb3b22: Status 404 returned error can't find the container with id 324f3bf11141822e8e8f986502a9bb10d48cd30b0038c696b3fb18d996fb3b22 Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.065391 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.077945 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.081011 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.082411 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vrgpw" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.082730 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.083000 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.083248 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.157551 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-lock\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.158195 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-cache\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.158287 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: 
I1128 06:38:04.158378 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxjv\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-kube-api-access-lzxjv\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.158458 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.206453 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" event={"ID":"c8800bd8-704b-4cf0-b984-edaebf47b963","Type":"ContainerStarted","Data":"06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f"} Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.206671 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" podUID="c8800bd8-704b-4cf0-b984-edaebf47b963" containerName="dnsmasq-dns" containerID="cri-o://06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f" gracePeriod=10 Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.206689 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.208453 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xjtfj" event={"ID":"8c14d08e-06cc-409c-84d7-dad9fcfc4835","Type":"ContainerStarted","Data":"324f3bf11141822e8e8f986502a9bb10d48cd30b0038c696b3fb18d996fb3b22"} Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.228487 4955 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" podStartSLOduration=9.228470722 podStartE2EDuration="9.228470722s" podCreationTimestamp="2025-11-28 06:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:04.225842507 +0000 UTC m=+1006.815098077" watchObservedRunningTime="2025-11-28 06:38:04.228470722 +0000 UTC m=+1006.817726292" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.260436 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxjv\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-kube-api-access-lzxjv\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.260585 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.260637 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-lock\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.260747 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-cache\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.260773 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: E1128 06:38:04.260773 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 06:38:04 crc kubenswrapper[4955]: E1128 06:38:04.260802 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 06:38:04 crc kubenswrapper[4955]: E1128 06:38:04.260860 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift podName:0b38ef12-050e-4f3e-9b92-79ad3baba7d7 nodeName:}" failed. No retries permitted until 2025-11-28 06:38:04.760835542 +0000 UTC m=+1007.350091102 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift") pod "swift-storage-0" (UID: "0b38ef12-050e-4f3e-9b92-79ad3baba7d7") : configmap "swift-ring-files" not found Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.261277 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.261593 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-cache\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.261717 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-lock\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.286959 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.295776 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxjv\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-kube-api-access-lzxjv\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " 
pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.410836 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.498734 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="da36284b-b2a1-4008-a19c-3916e99c0bec" containerName="galera" probeResult="failure" output=< Nov 28 06:38:04 crc kubenswrapper[4955]: wsrep_local_state_comment (Joined) differs from Synced Nov 28 06:38:04 crc kubenswrapper[4955]: > Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.583047 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.690656 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 28 06:38:04 crc kubenswrapper[4955]: E1128 06:38:04.770896 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 06:38:04 crc kubenswrapper[4955]: E1128 06:38:04.770929 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 06:38:04 crc kubenswrapper[4955]: I1128 06:38:04.770955 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:04 crc kubenswrapper[4955]: E1128 06:38:04.770993 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift podName:0b38ef12-050e-4f3e-9b92-79ad3baba7d7 nodeName:}" failed. 
No retries permitted until 2025-11-28 06:38:05.770974556 +0000 UTC m=+1008.360230136 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift") pod "swift-storage-0" (UID: "0b38ef12-050e-4f3e-9b92-79ad3baba7d7") : configmap "swift-ring-files" not found Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.112890 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.179815 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmn9l\" (UniqueName: \"kubernetes.io/projected/c8800bd8-704b-4cf0-b984-edaebf47b963-kube-api-access-vmn9l\") pod \"c8800bd8-704b-4cf0-b984-edaebf47b963\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.179872 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-config\") pod \"c8800bd8-704b-4cf0-b984-edaebf47b963\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.179961 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-nb\") pod \"c8800bd8-704b-4cf0-b984-edaebf47b963\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.180011 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-sb\") pod \"c8800bd8-704b-4cf0-b984-edaebf47b963\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") " Nov 28 06:38:05 crc 
kubenswrapper[4955]: I1128 06:38:05.180042 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-dns-svc\") pod \"c8800bd8-704b-4cf0-b984-edaebf47b963\" (UID: \"c8800bd8-704b-4cf0-b984-edaebf47b963\") "
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.189860 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8800bd8-704b-4cf0-b984-edaebf47b963-kube-api-access-vmn9l" (OuterVolumeSpecName: "kube-api-access-vmn9l") pod "c8800bd8-704b-4cf0-b984-edaebf47b963" (UID: "c8800bd8-704b-4cf0-b984-edaebf47b963"). InnerVolumeSpecName "kube-api-access-vmn9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.220692 4955 generic.go:334] "Generic (PLEG): container finished" podID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" containerID="23c9d8946e10dd61c42eab2dc0d24cb635b854d9311849117480e4f89040d8d7" exitCode=0
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.220774 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xjtfj" event={"ID":"8c14d08e-06cc-409c-84d7-dad9fcfc4835","Type":"ContainerDied","Data":"23c9d8946e10dd61c42eab2dc0d24cb635b854d9311849117480e4f89040d8d7"}
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.224096 4955 generic.go:334] "Generic (PLEG): container finished" podID="c8800bd8-704b-4cf0-b984-edaebf47b963" containerID="06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f" exitCode=0
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.224265 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" event={"ID":"c8800bd8-704b-4cf0-b984-edaebf47b963","Type":"ContainerDied","Data":"06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f"}
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.224367 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc" event={"ID":"c8800bd8-704b-4cf0-b984-edaebf47b963","Type":"ContainerDied","Data":"d53e5bce47b04587d28f06bf48768c51d9a367c32e154bcb14e93925356e6e5e"}
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.224459 4955 scope.go:117] "RemoveContainer" containerID="06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f"
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.224274 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-zlllc"
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.250753 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8800bd8-704b-4cf0-b984-edaebf47b963" (UID: "c8800bd8-704b-4cf0-b984-edaebf47b963"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.254215 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8800bd8-704b-4cf0-b984-edaebf47b963" (UID: "c8800bd8-704b-4cf0-b984-edaebf47b963"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.257998 4955 scope.go:117] "RemoveContainer" containerID="f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0"
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.274732 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-config" (OuterVolumeSpecName: "config") pod "c8800bd8-704b-4cf0-b984-edaebf47b963" (UID: "c8800bd8-704b-4cf0-b984-edaebf47b963"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.282666 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8800bd8-704b-4cf0-b984-edaebf47b963" (UID: "c8800bd8-704b-4cf0-b984-edaebf47b963"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.283826 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.283847 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmn9l\" (UniqueName: \"kubernetes.io/projected/c8800bd8-704b-4cf0-b984-edaebf47b963-kube-api-access-vmn9l\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.283857 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.283868 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.283876 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8800bd8-704b-4cf0-b984-edaebf47b963-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.397980 4955 scope.go:117] "RemoveContainer" containerID="06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f"
Nov 28 06:38:05 crc kubenswrapper[4955]: E1128 06:38:05.398498 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f\": container with ID starting with 06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f not found: ID does not exist" containerID="06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f"
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.398541 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f"} err="failed to get container status \"06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f\": rpc error: code = NotFound desc = could not find container \"06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f\": container with ID starting with 06a82170a06b351c98ca7e4ba795a2a8abcc087e5bdbbdb8e5bbb54b44199b3f not found: ID does not exist"
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.398564 4955 scope.go:117] "RemoveContainer" containerID="f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0"
Nov 28 06:38:05 crc kubenswrapper[4955]: E1128 06:38:05.399019 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0\": container with ID starting with f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0 not found: ID does not exist" containerID="f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0"
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.399062 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0"} err="failed to get container status \"f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0\": rpc error: code = NotFound desc = could not find container \"f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0\": container with ID starting with f516aed8871cba81c839000083d8e4da654f4925e875bfcc994a6e3c9d3ec2d0 not found: ID does not exist"
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.554348 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zlllc"]
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.560497 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-zlllc"]
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.714544 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8800bd8-704b-4cf0-b984-edaebf47b963" path="/var/lib/kubelet/pods/c8800bd8-704b-4cf0-b984-edaebf47b963/volumes"
Nov 28 06:38:05 crc kubenswrapper[4955]: I1128 06:38:05.796452 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0"
Nov 28 06:38:05 crc kubenswrapper[4955]: E1128 06:38:05.796781 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 06:38:05 crc kubenswrapper[4955]: E1128 06:38:05.796813 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 06:38:05 crc kubenswrapper[4955]: E1128 06:38:05.796889 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift podName:0b38ef12-050e-4f3e-9b92-79ad3baba7d7 nodeName:}" failed. No retries permitted until 2025-11-28 06:38:07.796864113 +0000 UTC m=+1010.386119693 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift") pod "swift-storage-0" (UID: "0b38ef12-050e-4f3e-9b92-79ad3baba7d7") : configmap "swift-ring-files" not found
Nov 28 06:38:06 crc kubenswrapper[4955]: I1128 06:38:06.255762 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xjtfj" event={"ID":"8c14d08e-06cc-409c-84d7-dad9fcfc4835","Type":"ContainerStarted","Data":"42ae50065f84da9a1d85516a7f903e9828f8fb27c281d93ba4542e620505a0a9"}
Nov 28 06:38:06 crc kubenswrapper[4955]: I1128 06:38:06.255938 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-xjtfj"
Nov 28 06:38:06 crc kubenswrapper[4955]: I1128 06:38:06.272119 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"88f30640-ea6e-4479-b4ab-4e21f96f7ddb","Type":"ContainerStarted","Data":"980f7faec16d6144522d7532f5742af7e99f3d21daa148f74f35c1f8a5d046f1"}
Nov 28 06:38:06 crc kubenswrapper[4955]: I1128 06:38:06.272227 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"88f30640-ea6e-4479-b4ab-4e21f96f7ddb","Type":"ContainerStarted","Data":"d41634dffc4aaeadd2b9b5419caeb0dc21d7137bc370f023e517baad51dc0c05"}
Nov 28 06:38:06 crc kubenswrapper[4955]: I1128 06:38:06.272344 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 28 06:38:06 crc kubenswrapper[4955]: I1128 06:38:06.290866 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-xjtfj" podStartSLOduration=4.290826466 podStartE2EDuration="4.290826466s" podCreationTimestamp="2025-11-28 06:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:06.277090395 +0000 UTC m=+1008.866346045" watchObservedRunningTime="2025-11-28 06:38:06.290826466 +0000 UTC m=+1008.880082046"
Nov 28 06:38:06 crc kubenswrapper[4955]: I1128 06:38:06.315021 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.558254193 podStartE2EDuration="10.314999484s" podCreationTimestamp="2025-11-28 06:37:56 +0000 UTC" firstStartedPulling="2025-11-28 06:37:57.240177498 +0000 UTC m=+999.829433068" lastFinishedPulling="2025-11-28 06:38:04.996922789 +0000 UTC m=+1007.586178359" observedRunningTime="2025-11-28 06:38:06.303414624 +0000 UTC m=+1008.892670234" watchObservedRunningTime="2025-11-28 06:38:06.314999484 +0000 UTC m=+1008.904255064"
Nov 28 06:38:07 crc kubenswrapper[4955]: I1128 06:38:07.832053 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0"
Nov 28 06:38:07 crc kubenswrapper[4955]: E1128 06:38:07.832278 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 06:38:07 crc kubenswrapper[4955]: E1128 06:38:07.832324 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 06:38:07 crc kubenswrapper[4955]: E1128 06:38:07.832415 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift podName:0b38ef12-050e-4f3e-9b92-79ad3baba7d7 nodeName:}" failed. No retries permitted until 2025-11-28 06:38:11.832385902 +0000 UTC m=+1014.421641512 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift") pod "swift-storage-0" (UID: "0b38ef12-050e-4f3e-9b92-79ad3baba7d7") : configmap "swift-ring-files" not found
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.168380 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5bmcj"]
Nov 28 06:38:08 crc kubenswrapper[4955]: E1128 06:38:08.170023 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8800bd8-704b-4cf0-b984-edaebf47b963" containerName="dnsmasq-dns"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.170071 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8800bd8-704b-4cf0-b984-edaebf47b963" containerName="dnsmasq-dns"
Nov 28 06:38:08 crc kubenswrapper[4955]: E1128 06:38:08.170367 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8800bd8-704b-4cf0-b984-edaebf47b963" containerName="init"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.170387 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8800bd8-704b-4cf0-b984-edaebf47b963" containerName="init"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.170982 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8800bd8-704b-4cf0-b984-edaebf47b963" containerName="dnsmasq-dns"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.187901 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.191048 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.191425 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.191448 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.216203 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-hf68n"]
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.218396 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.228806 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5bmcj"]
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241391 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-swiftconf\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241461 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-swiftconf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241495 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-ring-data-devices\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241594 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-scripts\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241616 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8qsf\" (UniqueName: \"kubernetes.io/projected/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-kube-api-access-v8qsf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241665 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-combined-ca-bundle\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241709 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-etc-swift\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241732 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-scripts\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241767 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-ring-data-devices\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241797 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22zmp\" (UniqueName: \"kubernetes.io/projected/970b7c8e-b2df-4f21-855a-3c8e76a2b598-kube-api-access-22zmp\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241824 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-combined-ca-bundle\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241853 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-dispersionconf\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241883 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/970b7c8e-b2df-4f21-855a-3c8e76a2b598-etc-swift\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.241904 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-dispersionconf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.252412 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-5bmcj"]
Nov 28 06:38:08 crc kubenswrapper[4955]: E1128 06:38:08.252856 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-22zmp ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-5bmcj" podUID="970b7c8e-b2df-4f21-855a-3c8e76a2b598"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.264697 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hf68n"]
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.288153 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.297900 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.343727 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-etc-swift\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.343781 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-scripts\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.343886 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-ring-data-devices\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.343937 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22zmp\" (UniqueName: \"kubernetes.io/projected/970b7c8e-b2df-4f21-855a-3c8e76a2b598-kube-api-access-22zmp\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.343973 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-combined-ca-bundle\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344035 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-dispersionconf\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344078 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-dispersionconf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344112 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/970b7c8e-b2df-4f21-855a-3c8e76a2b598-etc-swift\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344185 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-swiftconf\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344253 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-swiftconf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344298 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-ring-data-devices\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344346 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-scripts\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344381 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8qsf\" (UniqueName: \"kubernetes.io/projected/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-kube-api-access-v8qsf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344446 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-combined-ca-bundle\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.344297 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-etc-swift\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.345759 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/970b7c8e-b2df-4f21-855a-3c8e76a2b598-etc-swift\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.346380 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-scripts\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.346538 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-scripts\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.346693 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-ring-data-devices\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.346712 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-ring-data-devices\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.349706 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-swiftconf\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.349757 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-dispersionconf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.350234 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-swiftconf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.350664 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-combined-ca-bundle\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.350858 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-dispersionconf\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.354401 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-combined-ca-bundle\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.364914 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22zmp\" (UniqueName: \"kubernetes.io/projected/970b7c8e-b2df-4f21-855a-3c8e76a2b598-kube-api-access-22zmp\") pod \"swift-ring-rebalance-5bmcj\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") " pod="openstack/swift-ring-rebalance-5bmcj"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.366376 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8qsf\" (UniqueName: \"kubernetes.io/projected/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-kube-api-access-v8qsf\") pod \"swift-ring-rebalance-hf68n\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " pod="openstack/swift-ring-rebalance-hf68n"
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.445849 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-swiftconf\") pod \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") "
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.445884 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-dispersionconf\") pod \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") "
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.445922 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/970b7c8e-b2df-4f21-855a-3c8e76a2b598-etc-swift\") pod \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") "
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.445948 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22zmp\" (UniqueName: \"kubernetes.io/projected/970b7c8e-b2df-4f21-855a-3c8e76a2b598-kube-api-access-22zmp\") pod \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") "
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.445998 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-ring-data-devices\") pod \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") "
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.446061 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-combined-ca-bundle\") pod \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") "
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.446116 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-scripts\") pod \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\" (UID: \"970b7c8e-b2df-4f21-855a-3c8e76a2b598\") "
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.446688 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "970b7c8e-b2df-4f21-855a-3c8e76a2b598" (UID: "970b7c8e-b2df-4f21-855a-3c8e76a2b598"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.446856 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-scripts" (OuterVolumeSpecName: "scripts") pod "970b7c8e-b2df-4f21-855a-3c8e76a2b598" (UID: "970b7c8e-b2df-4f21-855a-3c8e76a2b598"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.449659 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970b7c8e-b2df-4f21-855a-3c8e76a2b598-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "970b7c8e-b2df-4f21-855a-3c8e76a2b598" (UID: "970b7c8e-b2df-4f21-855a-3c8e76a2b598"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.450646 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970b7c8e-b2df-4f21-855a-3c8e76a2b598-kube-api-access-22zmp" (OuterVolumeSpecName: "kube-api-access-22zmp") pod "970b7c8e-b2df-4f21-855a-3c8e76a2b598" (UID: "970b7c8e-b2df-4f21-855a-3c8e76a2b598"). InnerVolumeSpecName "kube-api-access-22zmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.452123 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "970b7c8e-b2df-4f21-855a-3c8e76a2b598" (UID: "970b7c8e-b2df-4f21-855a-3c8e76a2b598"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.452633 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "970b7c8e-b2df-4f21-855a-3c8e76a2b598" (UID: "970b7c8e-b2df-4f21-855a-3c8e76a2b598"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.454689 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "970b7c8e-b2df-4f21-855a-3c8e76a2b598" (UID: "970b7c8e-b2df-4f21-855a-3c8e76a2b598"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.547912 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.547956 4955 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-swiftconf\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.547969 4955 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-dispersionconf\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.547982 4955 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/970b7c8e-b2df-4f21-855a-3c8e76a2b598-etc-swift\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:08 crc kubenswrapper[4955]:
I1128 06:38:08.547995 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22zmp\" (UniqueName: \"kubernetes.io/projected/970b7c8e-b2df-4f21-855a-3c8e76a2b598-kube-api-access-22zmp\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.548008 4955 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/970b7c8e-b2df-4f21-855a-3c8e76a2b598-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.548021 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970b7c8e-b2df-4f21-855a-3c8e76a2b598-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.551990 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-hf68n" Nov 28 06:38:08 crc kubenswrapper[4955]: I1128 06:38:08.814328 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-hf68n"] Nov 28 06:38:08 crc kubenswrapper[4955]: W1128 06:38:08.819020 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8cdb34d_d310_43c4_bdcd_83e12752f6ea.slice/crio-ed3e2fa9f10d5ab32daa317714a0beeeef8e352c2c5a0c67e581c4f1daeeb070 WatchSource:0}: Error finding container ed3e2fa9f10d5ab32daa317714a0beeeef8e352c2c5a0c67e581c4f1daeeb070: Status 404 returned error can't find the container with id ed3e2fa9f10d5ab32daa317714a0beeeef8e352c2c5a0c67e581c4f1daeeb070 Nov 28 06:38:09 crc kubenswrapper[4955]: I1128 06:38:09.302903 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hf68n" event={"ID":"f8cdb34d-d310-43c4-bdcd-83e12752f6ea","Type":"ContainerStarted","Data":"ed3e2fa9f10d5ab32daa317714a0beeeef8e352c2c5a0c67e581c4f1daeeb070"} Nov 28 
06:38:09 crc kubenswrapper[4955]: I1128 06:38:09.302944 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5bmcj" Nov 28 06:38:09 crc kubenswrapper[4955]: I1128 06:38:09.389694 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-5bmcj"] Nov 28 06:38:09 crc kubenswrapper[4955]: I1128 06:38:09.401557 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-5bmcj"] Nov 28 06:38:09 crc kubenswrapper[4955]: I1128 06:38:09.490720 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 28 06:38:09 crc kubenswrapper[4955]: I1128 06:38:09.718543 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970b7c8e-b2df-4f21-855a-3c8e76a2b598" path="/var/lib/kubelet/pods/970b7c8e-b2df-4f21-855a-3c8e76a2b598/volumes" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.708633 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-97bc-account-create-update-tkw67"] Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.710090 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97bc-account-create-update-tkw67" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.712480 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.714622 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-97bc-account-create-update-tkw67"] Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.761382 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-xzbsd"] Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.762908 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-xzbsd" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.772328 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xzbsd"] Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.787141 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgmbm\" (UniqueName: \"kubernetes.io/projected/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-kube-api-access-sgmbm\") pod \"keystone-97bc-account-create-update-tkw67\" (UID: \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\") " pod="openstack/keystone-97bc-account-create-update-tkw67" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.787198 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwc5\" (UniqueName: \"kubernetes.io/projected/0983c5c7-3f49-4ac0-b096-31df44191680-kube-api-access-5vwc5\") pod \"keystone-db-create-xzbsd\" (UID: \"0983c5c7-3f49-4ac0-b096-31df44191680\") " pod="openstack/keystone-db-create-xzbsd" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.787275 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-operator-scripts\") pod \"keystone-97bc-account-create-update-tkw67\" (UID: \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\") " pod="openstack/keystone-97bc-account-create-update-tkw67" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.787394 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0983c5c7-3f49-4ac0-b096-31df44191680-operator-scripts\") pod \"keystone-db-create-xzbsd\" (UID: \"0983c5c7-3f49-4ac0-b096-31df44191680\") " pod="openstack/keystone-db-create-xzbsd" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.888191 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgmbm\" (UniqueName: \"kubernetes.io/projected/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-kube-api-access-sgmbm\") pod \"keystone-97bc-account-create-update-tkw67\" (UID: \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\") " pod="openstack/keystone-97bc-account-create-update-tkw67" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.888231 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vwc5\" (UniqueName: \"kubernetes.io/projected/0983c5c7-3f49-4ac0-b096-31df44191680-kube-api-access-5vwc5\") pod \"keystone-db-create-xzbsd\" (UID: \"0983c5c7-3f49-4ac0-b096-31df44191680\") " pod="openstack/keystone-db-create-xzbsd" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.888289 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-operator-scripts\") pod \"keystone-97bc-account-create-update-tkw67\" (UID: \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\") " pod="openstack/keystone-97bc-account-create-update-tkw67" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.888332 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0983c5c7-3f49-4ac0-b096-31df44191680-operator-scripts\") pod \"keystone-db-create-xzbsd\" (UID: \"0983c5c7-3f49-4ac0-b096-31df44191680\") " pod="openstack/keystone-db-create-xzbsd" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.889426 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0983c5c7-3f49-4ac0-b096-31df44191680-operator-scripts\") pod \"keystone-db-create-xzbsd\" (UID: \"0983c5c7-3f49-4ac0-b096-31df44191680\") " pod="openstack/keystone-db-create-xzbsd" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.890016 
4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-operator-scripts\") pod \"keystone-97bc-account-create-update-tkw67\" (UID: \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\") " pod="openstack/keystone-97bc-account-create-update-tkw67" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.906637 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vwc5\" (UniqueName: \"kubernetes.io/projected/0983c5c7-3f49-4ac0-b096-31df44191680-kube-api-access-5vwc5\") pod \"keystone-db-create-xzbsd\" (UID: \"0983c5c7-3f49-4ac0-b096-31df44191680\") " pod="openstack/keystone-db-create-xzbsd" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.907229 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgmbm\" (UniqueName: \"kubernetes.io/projected/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-kube-api-access-sgmbm\") pod \"keystone-97bc-account-create-update-tkw67\" (UID: \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\") " pod="openstack/keystone-97bc-account-create-update-tkw67" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.955875 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kb8tx"] Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.957583 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kb8tx" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.964223 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kb8tx"] Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.991820 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301210c2-d75a-4798-bca8-3a431ba13279-operator-scripts\") pod \"placement-db-create-kb8tx\" (UID: \"301210c2-d75a-4798-bca8-3a431ba13279\") " pod="openstack/placement-db-create-kb8tx" Nov 28 06:38:10 crc kubenswrapper[4955]: I1128 06:38:10.991928 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw9tz\" (UniqueName: \"kubernetes.io/projected/301210c2-d75a-4798-bca8-3a431ba13279-kube-api-access-gw9tz\") pod \"placement-db-create-kb8tx\" (UID: \"301210c2-d75a-4798-bca8-3a431ba13279\") " pod="openstack/placement-db-create-kb8tx" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.033747 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97bc-account-create-update-tkw67" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.061629 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ddf5-account-create-update-5lqm2"] Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.062717 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ddf5-account-create-update-5lqm2" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.064425 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.070613 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ddf5-account-create-update-5lqm2"] Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.089906 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xzbsd" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.093146 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301210c2-d75a-4798-bca8-3a431ba13279-operator-scripts\") pod \"placement-db-create-kb8tx\" (UID: \"301210c2-d75a-4798-bca8-3a431ba13279\") " pod="openstack/placement-db-create-kb8tx" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.093209 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f65108f-7b8b-41af-a2b6-d1893982504f-operator-scripts\") pod \"placement-ddf5-account-create-update-5lqm2\" (UID: \"7f65108f-7b8b-41af-a2b6-d1893982504f\") " pod="openstack/placement-ddf5-account-create-update-5lqm2" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.093268 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw9tz\" (UniqueName: \"kubernetes.io/projected/301210c2-d75a-4798-bca8-3a431ba13279-kube-api-access-gw9tz\") pod \"placement-db-create-kb8tx\" (UID: \"301210c2-d75a-4798-bca8-3a431ba13279\") " pod="openstack/placement-db-create-kb8tx" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.093290 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hcv28\" (UniqueName: \"kubernetes.io/projected/7f65108f-7b8b-41af-a2b6-d1893982504f-kube-api-access-hcv28\") pod \"placement-ddf5-account-create-update-5lqm2\" (UID: \"7f65108f-7b8b-41af-a2b6-d1893982504f\") " pod="openstack/placement-ddf5-account-create-update-5lqm2" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.093946 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301210c2-d75a-4798-bca8-3a431ba13279-operator-scripts\") pod \"placement-db-create-kb8tx\" (UID: \"301210c2-d75a-4798-bca8-3a431ba13279\") " pod="openstack/placement-db-create-kb8tx" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.113700 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw9tz\" (UniqueName: \"kubernetes.io/projected/301210c2-d75a-4798-bca8-3a431ba13279-kube-api-access-gw9tz\") pod \"placement-db-create-kb8tx\" (UID: \"301210c2-d75a-4798-bca8-3a431ba13279\") " pod="openstack/placement-db-create-kb8tx" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.195041 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f65108f-7b8b-41af-a2b6-d1893982504f-operator-scripts\") pod \"placement-ddf5-account-create-update-5lqm2\" (UID: \"7f65108f-7b8b-41af-a2b6-d1893982504f\") " pod="openstack/placement-ddf5-account-create-update-5lqm2" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.195200 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcv28\" (UniqueName: \"kubernetes.io/projected/7f65108f-7b8b-41af-a2b6-d1893982504f-kube-api-access-hcv28\") pod \"placement-ddf5-account-create-update-5lqm2\" (UID: \"7f65108f-7b8b-41af-a2b6-d1893982504f\") " pod="openstack/placement-ddf5-account-create-update-5lqm2" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.196237 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f65108f-7b8b-41af-a2b6-d1893982504f-operator-scripts\") pod \"placement-ddf5-account-create-update-5lqm2\" (UID: \"7f65108f-7b8b-41af-a2b6-d1893982504f\") " pod="openstack/placement-ddf5-account-create-update-5lqm2" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.212986 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcv28\" (UniqueName: \"kubernetes.io/projected/7f65108f-7b8b-41af-a2b6-d1893982504f-kube-api-access-hcv28\") pod \"placement-ddf5-account-create-update-5lqm2\" (UID: \"7f65108f-7b8b-41af-a2b6-d1893982504f\") " pod="openstack/placement-ddf5-account-create-update-5lqm2" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.302426 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kb8tx" Nov 28 06:38:11 crc kubenswrapper[4955]: I1128 06:38:11.386250 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ddf5-account-create-update-5lqm2" Nov 28 06:38:12 crc kubenswrapper[4955]: I1128 06:38:12.070311 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0" Nov 28 06:38:12 crc kubenswrapper[4955]: E1128 06:38:12.071399 4955 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 06:38:12 crc kubenswrapper[4955]: E1128 06:38:12.071434 4955 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 06:38:12 crc kubenswrapper[4955]: E1128 06:38:12.071563 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift podName:0b38ef12-050e-4f3e-9b92-79ad3baba7d7 nodeName:}" failed. No retries permitted until 2025-11-28 06:38:20.071487698 +0000 UTC m=+1022.660743308 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift") pod "swift-storage-0" (UID: "0b38ef12-050e-4f3e-9b92-79ad3baba7d7") : configmap "swift-ring-files" not found Nov 28 06:38:12 crc kubenswrapper[4955]: I1128 06:38:12.840242 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kb8tx"] Nov 28 06:38:12 crc kubenswrapper[4955]: I1128 06:38:12.865829 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-97bc-account-create-update-tkw67"] Nov 28 06:38:12 crc kubenswrapper[4955]: I1128 06:38:12.931366 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ddf5-account-create-update-5lqm2"] Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.022258 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xzbsd"] Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.344150 4955 generic.go:334] "Generic (PLEG): container finished" podID="b9e0ef99-b7f7-401a-a834-d0581bfd39e5" containerID="c845ba95fa41c18934ea5deb20e72e4992772baf01f1c0b2cc55aa4e8e59401e" exitCode=0 Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.344254 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97bc-account-create-update-tkw67" event={"ID":"b9e0ef99-b7f7-401a-a834-d0581bfd39e5","Type":"ContainerDied","Data":"c845ba95fa41c18934ea5deb20e72e4992772baf01f1c0b2cc55aa4e8e59401e"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.345599 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97bc-account-create-update-tkw67" event={"ID":"b9e0ef99-b7f7-401a-a834-d0581bfd39e5","Type":"ContainerStarted","Data":"91f9a1a864698eccdc44c5aa7316f4c3faa20566d779c617efc0bfa714ba505f"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.347280 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="301210c2-d75a-4798-bca8-3a431ba13279" containerID="18ee145313623ca08cedfa3bc78566493cc1792cffdddfc68d998387d0c09880" exitCode=0 Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.347356 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kb8tx" event={"ID":"301210c2-d75a-4798-bca8-3a431ba13279","Type":"ContainerDied","Data":"18ee145313623ca08cedfa3bc78566493cc1792cffdddfc68d998387d0c09880"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.347383 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kb8tx" event={"ID":"301210c2-d75a-4798-bca8-3a431ba13279","Type":"ContainerStarted","Data":"6a417422d740e46b427615c966ecd044610bc0b42cf0fcf56ceabd7c659439fe"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.348522 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.349320 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xzbsd" event={"ID":"0983c5c7-3f49-4ac0-b096-31df44191680","Type":"ContainerStarted","Data":"697cc3dce80600b8ffd76dc0d63640ce79c93d8fcf145090e80a9be5ad06027a"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.349416 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xzbsd" event={"ID":"0983c5c7-3f49-4ac0-b096-31df44191680","Type":"ContainerStarted","Data":"df35b4d61deb8c9194a16c7555dc94cca00aeb54ddf06639564907c64fe70f48"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.350963 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hf68n" event={"ID":"f8cdb34d-d310-43c4-bdcd-83e12752f6ea","Type":"ContainerStarted","Data":"e9de94a6758dae83a6c9a27c4469998cc078a76713a0bf147b3f04bb35731008"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.352786 4955 generic.go:334] "Generic (PLEG): container finished" 
podID="7f65108f-7b8b-41af-a2b6-d1893982504f" containerID="44403e446c9bb229c16c6b58bd407cd20b18f4ad3626ed23c9510fb0c2985533" exitCode=0 Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.352894 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddf5-account-create-update-5lqm2" event={"ID":"7f65108f-7b8b-41af-a2b6-d1893982504f","Type":"ContainerDied","Data":"44403e446c9bb229c16c6b58bd407cd20b18f4ad3626ed23c9510fb0c2985533"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.352972 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddf5-account-create-update-5lqm2" event={"ID":"7f65108f-7b8b-41af-a2b6-d1893982504f","Type":"ContainerStarted","Data":"e1039e289b18153c9a6cd94328de9802eab559c61a517d864d39a455961dd18a"} Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.434922 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-hf68n" podStartSLOduration=1.8390269209999999 podStartE2EDuration="5.434893664s" podCreationTimestamp="2025-11-28 06:38:08 +0000 UTC" firstStartedPulling="2025-11-28 06:38:08.821187533 +0000 UTC m=+1011.410443103" lastFinishedPulling="2025-11-28 06:38:12.417054276 +0000 UTC m=+1015.006309846" observedRunningTime="2025-11-28 06:38:13.434149562 +0000 UTC m=+1016.023405132" watchObservedRunningTime="2025-11-28 06:38:13.434893664 +0000 UTC m=+1016.024149264" Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.468207 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7tl74"] Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.468483 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" podUID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" containerName="dnsmasq-dns" containerID="cri-o://da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f" gracePeriod=10 Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 
06:38:13.469762 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-xzbsd" podStartSLOduration=3.469749366 podStartE2EDuration="3.469749366s" podCreationTimestamp="2025-11-28 06:38:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:13.469267882 +0000 UTC m=+1016.058523482" watchObservedRunningTime="2025-11-28 06:38:13.469749366 +0000 UTC m=+1016.059004946" Nov 28 06:38:13 crc kubenswrapper[4955]: I1128 06:38:13.926535 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.002770 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-config\") pod \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.002964 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7llk\" (UniqueName: \"kubernetes.io/projected/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-kube-api-access-p7llk\") pod \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.003013 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-dns-svc\") pod \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\" (UID: \"f42cfc11-a12d-415e-8d9b-c4bcdde8e457\") " Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.008049 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-kube-api-access-p7llk" 
(OuterVolumeSpecName: "kube-api-access-p7llk") pod "f42cfc11-a12d-415e-8d9b-c4bcdde8e457" (UID: "f42cfc11-a12d-415e-8d9b-c4bcdde8e457"). InnerVolumeSpecName "kube-api-access-p7llk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.035918 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f42cfc11-a12d-415e-8d9b-c4bcdde8e457" (UID: "f42cfc11-a12d-415e-8d9b-c4bcdde8e457"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.042753 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-config" (OuterVolumeSpecName: "config") pod "f42cfc11-a12d-415e-8d9b-c4bcdde8e457" (UID: "f42cfc11-a12d-415e-8d9b-c4bcdde8e457"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.105122 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7llk\" (UniqueName: \"kubernetes.io/projected/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-kube-api-access-p7llk\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.105159 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.105171 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42cfc11-a12d-415e-8d9b-c4bcdde8e457-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.368618 4955 generic.go:334] "Generic (PLEG): container finished" podID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" containerID="da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f" exitCode=0
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.368752 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.370977 4955 generic.go:334] "Generic (PLEG): container finished" podID="0983c5c7-3f49-4ac0-b096-31df44191680" containerID="697cc3dce80600b8ffd76dc0d63640ce79c93d8fcf145090e80a9be5ad06027a" exitCode=0
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.373535 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" event={"ID":"f42cfc11-a12d-415e-8d9b-c4bcdde8e457","Type":"ContainerDied","Data":"da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f"}
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.373762 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-7tl74" event={"ID":"f42cfc11-a12d-415e-8d9b-c4bcdde8e457","Type":"ContainerDied","Data":"199170d4c0558f095717a642f2a1ca0c173c3a7c5a00d0d8439f35bee7448b73"}
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.373911 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xzbsd" event={"ID":"0983c5c7-3f49-4ac0-b096-31df44191680","Type":"ContainerDied","Data":"697cc3dce80600b8ffd76dc0d63640ce79c93d8fcf145090e80a9be5ad06027a"}
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.374114 4955 scope.go:117] "RemoveContainer" containerID="da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.415656 4955 scope.go:117] "RemoveContainer" containerID="006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.424745 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7tl74"]
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.446558 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-7tl74"]
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.454975 4955 scope.go:117] "RemoveContainer" containerID="da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f"
Nov 28 06:38:14 crc kubenswrapper[4955]: E1128 06:38:14.455500 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f\": container with ID starting with da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f not found: ID does not exist" containerID="da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.455540 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f"} err="failed to get container status \"da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f\": rpc error: code = NotFound desc = could not find container \"da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f\": container with ID starting with da16e64fc379c84d14574839d6a92fc4e4601b45f712648a4a9b7d8584e5f85f not found: ID does not exist"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.455562 4955 scope.go:117] "RemoveContainer" containerID="006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316"
Nov 28 06:38:14 crc kubenswrapper[4955]: E1128 06:38:14.456005 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316\": container with ID starting with 006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316 not found: ID does not exist" containerID="006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.456054 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316"} err="failed to get container status \"006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316\": rpc error: code = NotFound desc = could not find container \"006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316\": container with ID starting with 006978ecd8e47907ad378d9f969b69c61db59694e9289c5352f4db0b7afe2316 not found: ID does not exist"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.807252 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97bc-account-create-update-tkw67"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.859470 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ddf5-account-create-update-5lqm2"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.868203 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kb8tx"
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.935325 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-operator-scripts\") pod \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\" (UID: \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\") "
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.935461 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgmbm\" (UniqueName: \"kubernetes.io/projected/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-kube-api-access-sgmbm\") pod \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\" (UID: \"b9e0ef99-b7f7-401a-a834-d0581bfd39e5\") "
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.935782 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9e0ef99-b7f7-401a-a834-d0581bfd39e5" (UID: "b9e0ef99-b7f7-401a-a834-d0581bfd39e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.935984 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:14 crc kubenswrapper[4955]: I1128 06:38:14.938858 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-kube-api-access-sgmbm" (OuterVolumeSpecName: "kube-api-access-sgmbm") pod "b9e0ef99-b7f7-401a-a834-d0581bfd39e5" (UID: "b9e0ef99-b7f7-401a-a834-d0581bfd39e5"). InnerVolumeSpecName "kube-api-access-sgmbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.037228 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f65108f-7b8b-41af-a2b6-d1893982504f-operator-scripts\") pod \"7f65108f-7b8b-41af-a2b6-d1893982504f\" (UID: \"7f65108f-7b8b-41af-a2b6-d1893982504f\") "
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.037297 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301210c2-d75a-4798-bca8-3a431ba13279-operator-scripts\") pod \"301210c2-d75a-4798-bca8-3a431ba13279\" (UID: \"301210c2-d75a-4798-bca8-3a431ba13279\") "
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.037391 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcv28\" (UniqueName: \"kubernetes.io/projected/7f65108f-7b8b-41af-a2b6-d1893982504f-kube-api-access-hcv28\") pod \"7f65108f-7b8b-41af-a2b6-d1893982504f\" (UID: \"7f65108f-7b8b-41af-a2b6-d1893982504f\") "
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.037795 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw9tz\" (UniqueName: \"kubernetes.io/projected/301210c2-d75a-4798-bca8-3a431ba13279-kube-api-access-gw9tz\") pod \"301210c2-d75a-4798-bca8-3a431ba13279\" (UID: \"301210c2-d75a-4798-bca8-3a431ba13279\") "
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.038088 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgmbm\" (UniqueName: \"kubernetes.io/projected/b9e0ef99-b7f7-401a-a834-d0581bfd39e5-kube-api-access-sgmbm\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.038228 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f65108f-7b8b-41af-a2b6-d1893982504f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f65108f-7b8b-41af-a2b6-d1893982504f" (UID: "7f65108f-7b8b-41af-a2b6-d1893982504f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.039175 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301210c2-d75a-4798-bca8-3a431ba13279-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "301210c2-d75a-4798-bca8-3a431ba13279" (UID: "301210c2-d75a-4798-bca8-3a431ba13279"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.040814 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301210c2-d75a-4798-bca8-3a431ba13279-kube-api-access-gw9tz" (OuterVolumeSpecName: "kube-api-access-gw9tz") pod "301210c2-d75a-4798-bca8-3a431ba13279" (UID: "301210c2-d75a-4798-bca8-3a431ba13279"). InnerVolumeSpecName "kube-api-access-gw9tz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.042363 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f65108f-7b8b-41af-a2b6-d1893982504f-kube-api-access-hcv28" (OuterVolumeSpecName: "kube-api-access-hcv28") pod "7f65108f-7b8b-41af-a2b6-d1893982504f" (UID: "7f65108f-7b8b-41af-a2b6-d1893982504f"). InnerVolumeSpecName "kube-api-access-hcv28". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.139457 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcv28\" (UniqueName: \"kubernetes.io/projected/7f65108f-7b8b-41af-a2b6-d1893982504f-kube-api-access-hcv28\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.139499 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw9tz\" (UniqueName: \"kubernetes.io/projected/301210c2-d75a-4798-bca8-3a431ba13279-kube-api-access-gw9tz\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.139540 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f65108f-7b8b-41af-a2b6-d1893982504f-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.139559 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301210c2-d75a-4798-bca8-3a431ba13279-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.390026 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kb8tx"
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.390023 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kb8tx" event={"ID":"301210c2-d75a-4798-bca8-3a431ba13279","Type":"ContainerDied","Data":"6a417422d740e46b427615c966ecd044610bc0b42cf0fcf56ceabd7c659439fe"}
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.390136 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a417422d740e46b427615c966ecd044610bc0b42cf0fcf56ceabd7c659439fe"
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.394851 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ddf5-account-create-update-5lqm2" event={"ID":"7f65108f-7b8b-41af-a2b6-d1893982504f","Type":"ContainerDied","Data":"e1039e289b18153c9a6cd94328de9802eab559c61a517d864d39a455961dd18a"}
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.394887 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ddf5-account-create-update-5lqm2"
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.394894 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1039e289b18153c9a6cd94328de9802eab559c61a517d864d39a455961dd18a"
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.396465 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-97bc-account-create-update-tkw67"
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.396603 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-97bc-account-create-update-tkw67" event={"ID":"b9e0ef99-b7f7-401a-a834-d0581bfd39e5","Type":"ContainerDied","Data":"91f9a1a864698eccdc44c5aa7316f4c3faa20566d779c617efc0bfa714ba505f"}
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.396711 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f9a1a864698eccdc44c5aa7316f4c3faa20566d779c617efc0bfa714ba505f"
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.723049 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" path="/var/lib/kubelet/pods/f42cfc11-a12d-415e-8d9b-c4bcdde8e457/volumes"
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.841595 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xzbsd"
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.865301 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0983c5c7-3f49-4ac0-b096-31df44191680-operator-scripts\") pod \"0983c5c7-3f49-4ac0-b096-31df44191680\" (UID: \"0983c5c7-3f49-4ac0-b096-31df44191680\") "
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.865398 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vwc5\" (UniqueName: \"kubernetes.io/projected/0983c5c7-3f49-4ac0-b096-31df44191680-kube-api-access-5vwc5\") pod \"0983c5c7-3f49-4ac0-b096-31df44191680\" (UID: \"0983c5c7-3f49-4ac0-b096-31df44191680\") "
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.867898 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0983c5c7-3f49-4ac0-b096-31df44191680-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0983c5c7-3f49-4ac0-b096-31df44191680" (UID: "0983c5c7-3f49-4ac0-b096-31df44191680"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.872384 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0983c5c7-3f49-4ac0-b096-31df44191680-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.879371 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0983c5c7-3f49-4ac0-b096-31df44191680-kube-api-access-5vwc5" (OuterVolumeSpecName: "kube-api-access-5vwc5") pod "0983c5c7-3f49-4ac0-b096-31df44191680" (UID: "0983c5c7-3f49-4ac0-b096-31df44191680"). InnerVolumeSpecName "kube-api-access-5vwc5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:38:15 crc kubenswrapper[4955]: I1128 06:38:15.974550 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vwc5\" (UniqueName: \"kubernetes.io/projected/0983c5c7-3f49-4ac0-b096-31df44191680-kube-api-access-5vwc5\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.275563 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lc5bw"]
Nov 28 06:38:16 crc kubenswrapper[4955]: E1128 06:38:16.275935 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301210c2-d75a-4798-bca8-3a431ba13279" containerName="mariadb-database-create"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.275955 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="301210c2-d75a-4798-bca8-3a431ba13279" containerName="mariadb-database-create"
Nov 28 06:38:16 crc kubenswrapper[4955]: E1128 06:38:16.275971 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f65108f-7b8b-41af-a2b6-d1893982504f" containerName="mariadb-account-create-update"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.275980 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f65108f-7b8b-41af-a2b6-d1893982504f" containerName="mariadb-account-create-update"
Nov 28 06:38:16 crc kubenswrapper[4955]: E1128 06:38:16.275999 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0983c5c7-3f49-4ac0-b096-31df44191680" containerName="mariadb-database-create"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276009 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0983c5c7-3f49-4ac0-b096-31df44191680" containerName="mariadb-database-create"
Nov 28 06:38:16 crc kubenswrapper[4955]: E1128 06:38:16.276018 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" containerName="dnsmasq-dns"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276025 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" containerName="dnsmasq-dns"
Nov 28 06:38:16 crc kubenswrapper[4955]: E1128 06:38:16.276045 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" containerName="init"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276052 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" containerName="init"
Nov 28 06:38:16 crc kubenswrapper[4955]: E1128 06:38:16.276062 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e0ef99-b7f7-401a-a834-d0581bfd39e5" containerName="mariadb-account-create-update"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276070 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e0ef99-b7f7-401a-a834-d0581bfd39e5" containerName="mariadb-account-create-update"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276260 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f42cfc11-a12d-415e-8d9b-c4bcdde8e457" containerName="dnsmasq-dns"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276278 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="301210c2-d75a-4798-bca8-3a431ba13279" containerName="mariadb-database-create"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276292 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0983c5c7-3f49-4ac0-b096-31df44191680" containerName="mariadb-database-create"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276305 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e0ef99-b7f7-401a-a834-d0581bfd39e5" containerName="mariadb-account-create-update"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276314 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f65108f-7b8b-41af-a2b6-d1893982504f" containerName="mariadb-account-create-update"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.276834 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.288886 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-06a9-account-create-update-x2rxg"]
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.289816 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.293212 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.297810 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lc5bw"]
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.313999 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06a9-account-create-update-x2rxg"]
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.379928 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lch4c\" (UniqueName: \"kubernetes.io/projected/7e3e480c-0b9f-4b17-904a-4fd047194f99-kube-api-access-lch4c\") pod \"glance-06a9-account-create-update-x2rxg\" (UID: \"7e3e480c-0b9f-4b17-904a-4fd047194f99\") " pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.379993 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a218d231-afdd-433f-9ce9-a8b50e3b3631-operator-scripts\") pod \"glance-db-create-lc5bw\" (UID: \"a218d231-afdd-433f-9ce9-a8b50e3b3631\") " pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.380032 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e3e480c-0b9f-4b17-904a-4fd047194f99-operator-scripts\") pod \"glance-06a9-account-create-update-x2rxg\" (UID: \"7e3e480c-0b9f-4b17-904a-4fd047194f99\") " pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.380339 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v48x\" (UniqueName: \"kubernetes.io/projected/a218d231-afdd-433f-9ce9-a8b50e3b3631-kube-api-access-6v48x\") pod \"glance-db-create-lc5bw\" (UID: \"a218d231-afdd-433f-9ce9-a8b50e3b3631\") " pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.409312 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xzbsd" event={"ID":"0983c5c7-3f49-4ac0-b096-31df44191680","Type":"ContainerDied","Data":"df35b4d61deb8c9194a16c7555dc94cca00aeb54ddf06639564907c64fe70f48"}
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.409366 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df35b4d61deb8c9194a16c7555dc94cca00aeb54ddf06639564907c64fe70f48"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.409377 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xzbsd"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.482787 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a218d231-afdd-433f-9ce9-a8b50e3b3631-operator-scripts\") pod \"glance-db-create-lc5bw\" (UID: \"a218d231-afdd-433f-9ce9-a8b50e3b3631\") " pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.482889 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e3e480c-0b9f-4b17-904a-4fd047194f99-operator-scripts\") pod \"glance-06a9-account-create-update-x2rxg\" (UID: \"7e3e480c-0b9f-4b17-904a-4fd047194f99\") " pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.483133 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v48x\" (UniqueName: \"kubernetes.io/projected/a218d231-afdd-433f-9ce9-a8b50e3b3631-kube-api-access-6v48x\") pod \"glance-db-create-lc5bw\" (UID: \"a218d231-afdd-433f-9ce9-a8b50e3b3631\") " pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.483708 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lch4c\" (UniqueName: \"kubernetes.io/projected/7e3e480c-0b9f-4b17-904a-4fd047194f99-kube-api-access-lch4c\") pod \"glance-06a9-account-create-update-x2rxg\" (UID: \"7e3e480c-0b9f-4b17-904a-4fd047194f99\") " pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.484078 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e3e480c-0b9f-4b17-904a-4fd047194f99-operator-scripts\") pod \"glance-06a9-account-create-update-x2rxg\" (UID: \"7e3e480c-0b9f-4b17-904a-4fd047194f99\") " pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.483796 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a218d231-afdd-433f-9ce9-a8b50e3b3631-operator-scripts\") pod \"glance-db-create-lc5bw\" (UID: \"a218d231-afdd-433f-9ce9-a8b50e3b3631\") " pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.501034 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lch4c\" (UniqueName: \"kubernetes.io/projected/7e3e480c-0b9f-4b17-904a-4fd047194f99-kube-api-access-lch4c\") pod \"glance-06a9-account-create-update-x2rxg\" (UID: \"7e3e480c-0b9f-4b17-904a-4fd047194f99\") " pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.512614 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v48x\" (UniqueName: \"kubernetes.io/projected/a218d231-afdd-433f-9ce9-a8b50e3b3631-kube-api-access-6v48x\") pod \"glance-db-create-lc5bw\" (UID: \"a218d231-afdd-433f-9ce9-a8b50e3b3631\") " pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.597645 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.614105 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:16 crc kubenswrapper[4955]: I1128 06:38:16.858210 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 28 06:38:17 crc kubenswrapper[4955]: I1128 06:38:17.107075 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lc5bw"]
Nov 28 06:38:17 crc kubenswrapper[4955]: W1128 06:38:17.112217 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda218d231_afdd_433f_9ce9_a8b50e3b3631.slice/crio-7615e92b3f247aecf16aaf11a17d45a9dfad92639bcb3989603e3926c48bf7a2 WatchSource:0}: Error finding container 7615e92b3f247aecf16aaf11a17d45a9dfad92639bcb3989603e3926c48bf7a2: Status 404 returned error can't find the container with id 7615e92b3f247aecf16aaf11a17d45a9dfad92639bcb3989603e3926c48bf7a2
Nov 28 06:38:17 crc kubenswrapper[4955]: I1128 06:38:17.182192 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-06a9-account-create-update-x2rxg"]
Nov 28 06:38:17 crc kubenswrapper[4955]: W1128 06:38:17.194340 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e3e480c_0b9f_4b17_904a_4fd047194f99.slice/crio-ebf6f913910d63234047cbb0fc49c5a34c08626f742a7e716fcef0954ffe42fd WatchSource:0}: Error finding container ebf6f913910d63234047cbb0fc49c5a34c08626f742a7e716fcef0954ffe42fd: Status 404 returned error can't find the container with id ebf6f913910d63234047cbb0fc49c5a34c08626f742a7e716fcef0954ffe42fd
Nov 28 06:38:17 crc kubenswrapper[4955]: I1128 06:38:17.419092 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lc5bw" event={"ID":"a218d231-afdd-433f-9ce9-a8b50e3b3631","Type":"ContainerStarted","Data":"d22e93b6f0552960fcf92e5666ee86a336f2505165d71bfc002d1bf8c6dde707"}
Nov 28 06:38:17 crc kubenswrapper[4955]: I1128 06:38:17.419140 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lc5bw" event={"ID":"a218d231-afdd-433f-9ce9-a8b50e3b3631","Type":"ContainerStarted","Data":"7615e92b3f247aecf16aaf11a17d45a9dfad92639bcb3989603e3926c48bf7a2"}
Nov 28 06:38:17 crc kubenswrapper[4955]: I1128 06:38:17.421949 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-x2rxg" event={"ID":"7e3e480c-0b9f-4b17-904a-4fd047194f99","Type":"ContainerStarted","Data":"a87e624e918f15aed1e173f4e7cdaaead50289f2423da5c14e210fd29c53bf7e"}
Nov 28 06:38:17 crc kubenswrapper[4955]: I1128 06:38:17.421978 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-x2rxg" event={"ID":"7e3e480c-0b9f-4b17-904a-4fd047194f99","Type":"ContainerStarted","Data":"ebf6f913910d63234047cbb0fc49c5a34c08626f742a7e716fcef0954ffe42fd"}
Nov 28 06:38:17 crc kubenswrapper[4955]: I1128 06:38:17.441315 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-lc5bw" podStartSLOduration=1.441291334 podStartE2EDuration="1.441291334s" podCreationTimestamp="2025-11-28 06:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:17.431717401 +0000 UTC m=+1020.020973001" watchObservedRunningTime="2025-11-28 06:38:17.441291334 +0000 UTC m=+1020.030546934"
Nov 28 06:38:17 crc kubenswrapper[4955]: I1128 06:38:17.457635 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-06a9-account-create-update-x2rxg" podStartSLOduration=1.457612479 podStartE2EDuration="1.457612479s" podCreationTimestamp="2025-11-28 06:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:17.448904441 +0000 UTC m=+1020.038160031" watchObservedRunningTime="2025-11-28 06:38:17.457612479 +0000 UTC m=+1020.046868069"
Nov 28 06:38:18 crc kubenswrapper[4955]: I1128 06:38:18.448666 4955 generic.go:334] "Generic (PLEG): container finished" podID="a218d231-afdd-433f-9ce9-a8b50e3b3631" containerID="d22e93b6f0552960fcf92e5666ee86a336f2505165d71bfc002d1bf8c6dde707" exitCode=0
Nov 28 06:38:18 crc kubenswrapper[4955]: I1128 06:38:18.449098 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lc5bw" event={"ID":"a218d231-afdd-433f-9ce9-a8b50e3b3631","Type":"ContainerDied","Data":"d22e93b6f0552960fcf92e5666ee86a336f2505165d71bfc002d1bf8c6dde707"}
Nov 28 06:38:18 crc kubenswrapper[4955]: I1128 06:38:18.453942 4955 generic.go:334] "Generic (PLEG): container finished" podID="7e3e480c-0b9f-4b17-904a-4fd047194f99" containerID="a87e624e918f15aed1e173f4e7cdaaead50289f2423da5c14e210fd29c53bf7e" exitCode=0
Nov 28 06:38:18 crc kubenswrapper[4955]: I1128 06:38:18.454052 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-x2rxg" event={"ID":"7e3e480c-0b9f-4b17-904a-4fd047194f99","Type":"ContainerDied","Data":"a87e624e918f15aed1e173f4e7cdaaead50289f2423da5c14e210fd29c53bf7e"}
Nov 28 06:38:19 crc kubenswrapper[4955]: I1128 06:38:19.469028 4955 generic.go:334] "Generic (PLEG): container finished" podID="f8cdb34d-d310-43c4-bdcd-83e12752f6ea" containerID="e9de94a6758dae83a6c9a27c4469998cc078a76713a0bf147b3f04bb35731008" exitCode=0
Nov 28 06:38:19 crc kubenswrapper[4955]: I1128 06:38:19.469097 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hf68n" event={"ID":"f8cdb34d-d310-43c4-bdcd-83e12752f6ea","Type":"ContainerDied","Data":"e9de94a6758dae83a6c9a27c4469998cc078a76713a0bf147b3f04bb35731008"}
Nov 28 06:38:19 crc kubenswrapper[4955]: I1128 06:38:19.957134 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-x2rxg"
Nov 28 06:38:19 crc kubenswrapper[4955]: I1128 06:38:19.961892 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lc5bw"
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.039255 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v48x\" (UniqueName: \"kubernetes.io/projected/a218d231-afdd-433f-9ce9-a8b50e3b3631-kube-api-access-6v48x\") pod \"a218d231-afdd-433f-9ce9-a8b50e3b3631\" (UID: \"a218d231-afdd-433f-9ce9-a8b50e3b3631\") "
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.039308 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e3e480c-0b9f-4b17-904a-4fd047194f99-operator-scripts\") pod \"7e3e480c-0b9f-4b17-904a-4fd047194f99\" (UID: \"7e3e480c-0b9f-4b17-904a-4fd047194f99\") "
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.039358 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lch4c\" (UniqueName: \"kubernetes.io/projected/7e3e480c-0b9f-4b17-904a-4fd047194f99-kube-api-access-lch4c\") pod \"7e3e480c-0b9f-4b17-904a-4fd047194f99\" (UID: \"7e3e480c-0b9f-4b17-904a-4fd047194f99\") "
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.039378 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a218d231-afdd-433f-9ce9-a8b50e3b3631-operator-scripts\") pod \"a218d231-afdd-433f-9ce9-a8b50e3b3631\" (UID: \"a218d231-afdd-433f-9ce9-a8b50e3b3631\") "
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.040228 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a218d231-afdd-433f-9ce9-a8b50e3b3631-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a218d231-afdd-433f-9ce9-a8b50e3b3631" (UID: "a218d231-afdd-433f-9ce9-a8b50e3b3631"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.040318 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e3e480c-0b9f-4b17-904a-4fd047194f99-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e3e480c-0b9f-4b17-904a-4fd047194f99" (UID: "7e3e480c-0b9f-4b17-904a-4fd047194f99"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.046681 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3e480c-0b9f-4b17-904a-4fd047194f99-kube-api-access-lch4c" (OuterVolumeSpecName: "kube-api-access-lch4c") pod "7e3e480c-0b9f-4b17-904a-4fd047194f99" (UID: "7e3e480c-0b9f-4b17-904a-4fd047194f99"). InnerVolumeSpecName "kube-api-access-lch4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.049584 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a218d231-afdd-433f-9ce9-a8b50e3b3631-kube-api-access-6v48x" (OuterVolumeSpecName: "kube-api-access-6v48x") pod "a218d231-afdd-433f-9ce9-a8b50e3b3631" (UID: "a218d231-afdd-433f-9ce9-a8b50e3b3631"). InnerVolumeSpecName "kube-api-access-6v48x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.141082 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0"
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.141307 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lch4c\" (UniqueName: \"kubernetes.io/projected/7e3e480c-0b9f-4b17-904a-4fd047194f99-kube-api-access-lch4c\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.141338 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a218d231-afdd-433f-9ce9-a8b50e3b3631-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.141360 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v48x\" (UniqueName: \"kubernetes.io/projected/a218d231-afdd-433f-9ce9-a8b50e3b3631-kube-api-access-6v48x\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.141378 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e3e480c-0b9f-4b17-904a-4fd047194f99-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.146878 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0b38ef12-050e-4f3e-9b92-79ad3baba7d7-etc-swift\") pod \"swift-storage-0\" (UID: \"0b38ef12-050e-4f3e-9b92-79ad3baba7d7\") " pod="openstack/swift-storage-0"
Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.314182 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.510371 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lc5bw" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.510454 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lc5bw" event={"ID":"a218d231-afdd-433f-9ce9-a8b50e3b3631","Type":"ContainerDied","Data":"7615e92b3f247aecf16aaf11a17d45a9dfad92639bcb3989603e3926c48bf7a2"} Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.510879 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7615e92b3f247aecf16aaf11a17d45a9dfad92639bcb3989603e3926c48bf7a2" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.517040 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-06a9-account-create-update-x2rxg" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.517692 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-06a9-account-create-update-x2rxg" event={"ID":"7e3e480c-0b9f-4b17-904a-4fd047194f99","Type":"ContainerDied","Data":"ebf6f913910d63234047cbb0fc49c5a34c08626f742a7e716fcef0954ffe42fd"} Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.517768 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf6f913910d63234047cbb0fc49c5a34c08626f742a7e716fcef0954ffe42fd" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.799090 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hf68n" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.854299 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-combined-ca-bundle\") pod \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.854533 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8qsf\" (UniqueName: \"kubernetes.io/projected/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-kube-api-access-v8qsf\") pod \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.854680 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-ring-data-devices\") pod \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.854776 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-dispersionconf\") pod \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.854916 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-swiftconf\") pod \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.854999 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-etc-swift\") pod \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.855146 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-scripts\") pod \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\" (UID: \"f8cdb34d-d310-43c4-bdcd-83e12752f6ea\") " Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.856117 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f8cdb34d-d310-43c4-bdcd-83e12752f6ea" (UID: "f8cdb34d-d310-43c4-bdcd-83e12752f6ea"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.856242 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f8cdb34d-d310-43c4-bdcd-83e12752f6ea" (UID: "f8cdb34d-d310-43c4-bdcd-83e12752f6ea"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.862146 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-kube-api-access-v8qsf" (OuterVolumeSpecName: "kube-api-access-v8qsf") pod "f8cdb34d-d310-43c4-bdcd-83e12752f6ea" (UID: "f8cdb34d-d310-43c4-bdcd-83e12752f6ea"). InnerVolumeSpecName "kube-api-access-v8qsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.867122 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f8cdb34d-d310-43c4-bdcd-83e12752f6ea" (UID: "f8cdb34d-d310-43c4-bdcd-83e12752f6ea"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.895248 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8cdb34d-d310-43c4-bdcd-83e12752f6ea" (UID: "f8cdb34d-d310-43c4-bdcd-83e12752f6ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.898714 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f8cdb34d-d310-43c4-bdcd-83e12752f6ea" (UID: "f8cdb34d-d310-43c4-bdcd-83e12752f6ea"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.901532 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-scripts" (OuterVolumeSpecName: "scripts") pod "f8cdb34d-d310-43c4-bdcd-83e12752f6ea" (UID: "f8cdb34d-d310-43c4-bdcd-83e12752f6ea"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.957389 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.957665 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.957746 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8qsf\" (UniqueName: \"kubernetes.io/projected/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-kube-api-access-v8qsf\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.957832 4955 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.957907 4955 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.957974 4955 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.958040 4955 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f8cdb34d-d310-43c4-bdcd-83e12752f6ea-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:20 crc kubenswrapper[4955]: I1128 06:38:20.960745 4955 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 06:38:20 crc kubenswrapper[4955]: W1128 06:38:20.971096 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b38ef12_050e_4f3e_9b92_79ad3baba7d7.slice/crio-fa6d9790eb70a73646835d3cac61327547f2967ad1b72870c99e9aa8722aa9ed WatchSource:0}: Error finding container fa6d9790eb70a73646835d3cac61327547f2967ad1b72870c99e9aa8722aa9ed: Status 404 returned error can't find the container with id fa6d9790eb70a73646835d3cac61327547f2967ad1b72870c99e9aa8722aa9ed Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.464476 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-t5x9j"] Nov 28 06:38:21 crc kubenswrapper[4955]: E1128 06:38:21.464978 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a218d231-afdd-433f-9ce9-a8b50e3b3631" containerName="mariadb-database-create" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.465013 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a218d231-afdd-433f-9ce9-a8b50e3b3631" containerName="mariadb-database-create" Nov 28 06:38:21 crc kubenswrapper[4955]: E1128 06:38:21.465033 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8cdb34d-d310-43c4-bdcd-83e12752f6ea" containerName="swift-ring-rebalance" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.465046 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8cdb34d-d310-43c4-bdcd-83e12752f6ea" containerName="swift-ring-rebalance" Nov 28 06:38:21 crc kubenswrapper[4955]: E1128 06:38:21.465060 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3e480c-0b9f-4b17-904a-4fd047194f99" containerName="mariadb-account-create-update" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.465070 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3e480c-0b9f-4b17-904a-4fd047194f99" 
containerName="mariadb-account-create-update" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.465276 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8cdb34d-d310-43c4-bdcd-83e12752f6ea" containerName="swift-ring-rebalance" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.465295 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3e480c-0b9f-4b17-904a-4fd047194f99" containerName="mariadb-account-create-update" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.465317 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a218d231-afdd-433f-9ce9-a8b50e3b3631" containerName="mariadb-database-create" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.465938 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.468992 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.469029 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5v7xh" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.489042 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-t5x9j"] Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.531564 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-hf68n" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.533187 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-hf68n" event={"ID":"f8cdb34d-d310-43c4-bdcd-83e12752f6ea","Type":"ContainerDied","Data":"ed3e2fa9f10d5ab32daa317714a0beeeef8e352c2c5a0c67e581c4f1daeeb070"} Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.533231 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3e2fa9f10d5ab32daa317714a0beeeef8e352c2c5a0c67e581c4f1daeeb070" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.534989 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"fa6d9790eb70a73646835d3cac61327547f2967ad1b72870c99e9aa8722aa9ed"} Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.564179 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-combined-ca-bundle\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.564253 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrznz\" (UniqueName: \"kubernetes.io/projected/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-kube-api-access-wrznz\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.564286 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-db-sync-config-data\") pod 
\"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.564316 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-config-data\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.666156 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-config-data\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.666305 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-combined-ca-bundle\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.666371 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrznz\" (UniqueName: \"kubernetes.io/projected/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-kube-api-access-wrznz\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.666424 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-db-sync-config-data\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " 
pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.672325 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-combined-ca-bundle\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.672414 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-db-sync-config-data\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.674099 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-config-data\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.685740 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrznz\" (UniqueName: \"kubernetes.io/projected/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-kube-api-access-wrznz\") pod \"glance-db-sync-t5x9j\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:21 crc kubenswrapper[4955]: I1128 06:38:21.786822 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.344714 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-t5x9j"] Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.401751 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-p2bvh" podUID="3963971f-dccf-42a8-9889-b5e122ee6809" containerName="ovn-controller" probeResult="failure" output=< Nov 28 06:38:22 crc kubenswrapper[4955]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 06:38:22 crc kubenswrapper[4955]: > Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.500083 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.506092 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-nxwbb" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.551919 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"cf43f86b971bf03c4ec5347cc765b5897d631b1a5128ac0f4656375a69286992"} Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.553802 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t5x9j" event={"ID":"bf7bffc4-2591-486f-ac91-07aa7b2e8c30","Type":"ContainerStarted","Data":"1f23c171df577b355c4bbb5a7dcd29ceb33f17fbc91a4c64cd0b53978c6b8183"} Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.762858 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-p2bvh-config-x2qd5"] Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.764017 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.768053 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.786431 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5l2l\" (UniqueName: \"kubernetes.io/projected/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-kube-api-access-d5l2l\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.786512 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-additional-scripts\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.786624 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-scripts\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.786726 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-log-ovn\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.786798 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run-ovn\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.786859 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.796654 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p2bvh-config-x2qd5"] Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.887555 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5l2l\" (UniqueName: \"kubernetes.io/projected/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-kube-api-access-d5l2l\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.887641 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-additional-scripts\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.887690 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-scripts\") pod 
\"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.887725 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-log-ovn\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.887755 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run-ovn\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.887780 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.888062 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.888713 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-log-ovn\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: 
\"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.888712 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run-ovn\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.889046 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-additional-scripts\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.890338 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-scripts\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:22 crc kubenswrapper[4955]: I1128 06:38:22.911768 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5l2l\" (UniqueName: \"kubernetes.io/projected/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-kube-api-access-d5l2l\") pod \"ovn-controller-p2bvh-config-x2qd5\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.202713 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.562138 4955 generic.go:334] "Generic (PLEG): container finished" podID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" containerID="237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4" exitCode=0 Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.562237 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d","Type":"ContainerDied","Data":"237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4"} Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.565580 4955 generic.go:334] "Generic (PLEG): container finished" podID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" containerID="d87219f0bc006bb8d8315faf869bcf63563d1364fc23cc98cb44a72498571ff2" exitCode=0 Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.565645 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f22c44d9-b740-4aaf-bf4f-19eea62e6b42","Type":"ContainerDied","Data":"d87219f0bc006bb8d8315faf869bcf63563d1364fc23cc98cb44a72498571ff2"} Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.569933 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"11c06e8b6fecd8169f1547ccb9c88d984272652a769015af067c590b718d2319"} Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.569968 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"03c94ed36c5165846bd5ed899af39bb5178f75f7a0a8e55e0f0e3da9b145fc20"} Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.569976 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"8d3e88475ff7b66ad6ddd3725ffecd292003519d102966afa04b1fe51eb28029"} Nov 28 06:38:23 crc kubenswrapper[4955]: I1128 06:38:23.647434 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-p2bvh-config-x2qd5"] Nov 28 06:38:23 crc kubenswrapper[4955]: W1128 06:38:23.653423 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod739458de_7ea6_4ae8_8eb4_aeeb3610e3eb.slice/crio-533dc83b63f81f2bceb1e64516121df2d009e9049706c0b484b67ae3c933aae7 WatchSource:0}: Error finding container 533dc83b63f81f2bceb1e64516121df2d009e9049706c0b484b67ae3c933aae7: Status 404 returned error can't find the container with id 533dc83b63f81f2bceb1e64516121df2d009e9049706c0b484b67ae3c933aae7 Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.586986 4955 generic.go:334] "Generic (PLEG): container finished" podID="739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" containerID="8101e5ada4f68fbbea08c2b9de0f3e1037e29d658078b4e9a60ac3ec8a3eb327" exitCode=0 Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.587046 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p2bvh-config-x2qd5" event={"ID":"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb","Type":"ContainerDied","Data":"8101e5ada4f68fbbea08c2b9de0f3e1037e29d658078b4e9a60ac3ec8a3eb327"} Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.588045 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p2bvh-config-x2qd5" event={"ID":"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb","Type":"ContainerStarted","Data":"533dc83b63f81f2bceb1e64516121df2d009e9049706c0b484b67ae3c933aae7"} Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.591731 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d","Type":"ContainerStarted","Data":"56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a"} Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.592073 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.596710 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f22c44d9-b740-4aaf-bf4f-19eea62e6b42","Type":"ContainerStarted","Data":"ea41fb5c23be427cafc3835ec29e1540540c8dc4595aea2006d9f2a5e0cea344"} Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.596976 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.654751 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.93744734 podStartE2EDuration="58.654683026s" podCreationTimestamp="2025-11-28 06:37:26 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.104177354 +0000 UTC m=+983.693432924" lastFinishedPulling="2025-11-28 06:37:48.82141304 +0000 UTC m=+991.410668610" observedRunningTime="2025-11-28 06:38:24.631447215 +0000 UTC m=+1027.220702815" watchObservedRunningTime="2025-11-28 06:38:24.654683026 +0000 UTC m=+1027.243938606" Nov 28 06:38:24 crc kubenswrapper[4955]: I1128 06:38:24.664146 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.12321902 podStartE2EDuration="58.664112805s" podCreationTimestamp="2025-11-28 06:37:26 +0000 UTC" firstStartedPulling="2025-11-28 06:37:41.535242947 +0000 UTC m=+984.124498517" lastFinishedPulling="2025-11-28 06:37:49.076136702 +0000 UTC m=+991.665392302" observedRunningTime="2025-11-28 06:38:24.656371164 +0000 UTC m=+1027.245626744" watchObservedRunningTime="2025-11-28 
06:38:24.664112805 +0000 UTC m=+1027.253368385" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.609852 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"5f6da0d187f578ca1cdfd8f4130a2497b357604b441609fc5ef446be3f7fb992"} Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.610213 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"c74c4f975a32019f98b4634d4424c7f3a97a00cc0ddd513015f79cafac718130"} Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.610226 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"7a553ea9d5de279529d2860e2f6a50007e22b6ba92ad8d92aed07c957d9653ad"} Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.901023 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950213 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run\") pod \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950362 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run" (OuterVolumeSpecName: "var-run") pod "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" (UID: "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950434 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run-ovn\") pod \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950466 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5l2l\" (UniqueName: \"kubernetes.io/projected/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-kube-api-access-d5l2l\") pod \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950535 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" (UID: "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950806 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-log-ovn\") pod \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950846 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-scripts\") pod \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950879 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-additional-scripts\") pod \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\" (UID: \"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb\") " Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.950994 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" (UID: "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.951478 4955 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.951515 4955 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.951525 4955 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.951960 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" (UID: "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.953000 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-scripts" (OuterVolumeSpecName: "scripts") pod "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" (UID: "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:25 crc kubenswrapper[4955]: I1128 06:38:25.957764 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-kube-api-access-d5l2l" (OuterVolumeSpecName: "kube-api-access-d5l2l") pod "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" (UID: "739458de-7ea6-4ae8-8eb4-aeeb3610e3eb"). InnerVolumeSpecName "kube-api-access-d5l2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:26 crc kubenswrapper[4955]: I1128 06:38:26.053401 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5l2l\" (UniqueName: \"kubernetes.io/projected/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-kube-api-access-d5l2l\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:26 crc kubenswrapper[4955]: I1128 06:38:26.053436 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:26 crc kubenswrapper[4955]: I1128 06:38:26.053445 4955 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:26 crc kubenswrapper[4955]: I1128 06:38:26.618874 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-p2bvh-config-x2qd5" Nov 28 06:38:26 crc kubenswrapper[4955]: I1128 06:38:26.618883 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-p2bvh-config-x2qd5" event={"ID":"739458de-7ea6-4ae8-8eb4-aeeb3610e3eb","Type":"ContainerDied","Data":"533dc83b63f81f2bceb1e64516121df2d009e9049706c0b484b67ae3c933aae7"} Nov 28 06:38:26 crc kubenswrapper[4955]: I1128 06:38:26.619027 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="533dc83b63f81f2bceb1e64516121df2d009e9049706c0b484b67ae3c933aae7" Nov 28 06:38:26 crc kubenswrapper[4955]: I1128 06:38:26.623178 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"9f91ed112daf8a0c87268d134500e2efa8c65a638208c9928e2e19fa335c5526"} Nov 28 06:38:27 crc kubenswrapper[4955]: I1128 06:38:27.019600 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-p2bvh-config-x2qd5"] Nov 28 06:38:27 crc kubenswrapper[4955]: I1128 06:38:27.028438 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-p2bvh-config-x2qd5"] Nov 28 06:38:27 crc kubenswrapper[4955]: I1128 06:38:27.390860 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-p2bvh" Nov 28 06:38:27 crc kubenswrapper[4955]: I1128 06:38:27.654249 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"9107db8b65bf4b5967b78394ef4762c08aa33cb94dc218a3288cf1800c771689"} Nov 28 06:38:27 crc kubenswrapper[4955]: I1128 06:38:27.655644 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"b2955318753f1dbfe7ee3df5c02c03066a05bbfa0adbb68db2d35205b086e6e1"} Nov 28 06:38:27 crc kubenswrapper[4955]: I1128 06:38:27.655675 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"12f6ae090a97688466d0e3b33bdb48ad3b43ae04c03d110904c86f8b38195319"} Nov 28 06:38:27 crc kubenswrapper[4955]: I1128 06:38:27.655684 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"02f3f6f1629a368cac7e28d1e361d41552985a84bf8850e203ed81b1c72eeb0c"} Nov 28 06:38:27 crc kubenswrapper[4955]: I1128 06:38:27.718494 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" path="/var/lib/kubelet/pods/739458de-7ea6-4ae8-8eb4-aeeb3610e3eb/volumes" Nov 28 06:38:28 crc kubenswrapper[4955]: I1128 06:38:28.668769 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"07c129de84a26160da2375d4e505b8e88fd96bd06a013e9f96558bd476351003"} Nov 28 06:38:34 crc kubenswrapper[4955]: I1128 06:38:34.728572 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"4b340d90265fb44071b7829da8eaf2233217c0d3b55f34efa9d06baff4b802e9"} Nov 28 06:38:34 crc kubenswrapper[4955]: I1128 06:38:34.729193 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0b38ef12-050e-4f3e-9b92-79ad3baba7d7","Type":"ContainerStarted","Data":"9235dc70e0845a39abb188c2fbcfa4a46501dd0e7f19cee414be40e423857a80"} Nov 28 06:38:34 crc kubenswrapper[4955]: I1128 06:38:34.782926 4955 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=26.038485811 podStartE2EDuration="31.782900812s" podCreationTimestamp="2025-11-28 06:38:03 +0000 UTC" firstStartedPulling="2025-11-28 06:38:20.973039741 +0000 UTC m=+1023.562295301" lastFinishedPulling="2025-11-28 06:38:26.717454732 +0000 UTC m=+1029.306710302" observedRunningTime="2025-11-28 06:38:34.772572158 +0000 UTC m=+1037.361827738" watchObservedRunningTime="2025-11-28 06:38:34.782900812 +0000 UTC m=+1037.372156422" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.032016 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dwjhz"] Nov 28 06:38:35 crc kubenswrapper[4955]: E1128 06:38:35.032297 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" containerName="ovn-config" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.032317 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" containerName="ovn-config" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.032525 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="739458de-7ea6-4ae8-8eb4-aeeb3610e3eb" containerName="ovn-config" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.033362 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.035411 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.083605 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dwjhz"] Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.214139 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-config\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.214190 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.214226 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.214267 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " 
pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.214298 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6pmm\" (UniqueName: \"kubernetes.io/projected/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-kube-api-access-z6pmm\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.214315 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.315772 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.316040 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6pmm\" (UniqueName: \"kubernetes.io/projected/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-kube-api-access-z6pmm\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.316141 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: 
\"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.316279 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-config\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.316402 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.316604 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.317348 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-config\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.317379 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 
06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.317455 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.317586 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.317907 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.338163 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6pmm\" (UniqueName: \"kubernetes.io/projected/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-kube-api-access-z6pmm\") pod \"dnsmasq-dns-77585f5f8c-dwjhz\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.352033 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.615012 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dwjhz"] Nov 28 06:38:35 crc kubenswrapper[4955]: W1128 06:38:35.620197 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7418d4f_d4f8_4e84_a59c_2f7025a856b3.slice/crio-9eae7845ed4913899b66cfe900fe60c0a4c1178deb24a8c1a2c6153a182a3a93 WatchSource:0}: Error finding container 9eae7845ed4913899b66cfe900fe60c0a4c1178deb24a8c1a2c6153a182a3a93: Status 404 returned error can't find the container with id 9eae7845ed4913899b66cfe900fe60c0a4c1178deb24a8c1a2c6153a182a3a93 Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.745458 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" event={"ID":"a7418d4f-d4f8-4e84-a59c-2f7025a856b3","Type":"ContainerStarted","Data":"9eae7845ed4913899b66cfe900fe60c0a4c1178deb24a8c1a2c6153a182a3a93"} Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.749471 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t5x9j" event={"ID":"bf7bffc4-2591-486f-ac91-07aa7b2e8c30","Type":"ContainerStarted","Data":"1fc8347d67bcaf4e2aebeacf77f556747887a3ae28a2cb63eee041abda3093fc"} Nov 28 06:38:35 crc kubenswrapper[4955]: I1128 06:38:35.771859 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-t5x9j" podStartSLOduration=2.783454773 podStartE2EDuration="14.771841517s" podCreationTimestamp="2025-11-28 06:38:21 +0000 UTC" firstStartedPulling="2025-11-28 06:38:22.355233441 +0000 UTC m=+1024.944489011" lastFinishedPulling="2025-11-28 06:38:34.343620175 +0000 UTC m=+1036.932875755" observedRunningTime="2025-11-28 06:38:35.770649773 +0000 UTC m=+1038.359905343" watchObservedRunningTime="2025-11-28 06:38:35.771841517 +0000 UTC 
m=+1038.361097097" Nov 28 06:38:36 crc kubenswrapper[4955]: I1128 06:38:36.767343 4955 generic.go:334] "Generic (PLEG): container finished" podID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerID="5b0de1c1b2164247f1ee0ca3fe5b4a0b62b5ed56e3dcd2394c564b503a25d22a" exitCode=0 Nov 28 06:38:36 crc kubenswrapper[4955]: I1128 06:38:36.767484 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" event={"ID":"a7418d4f-d4f8-4e84-a59c-2f7025a856b3","Type":"ContainerDied","Data":"5b0de1c1b2164247f1ee0ca3fe5b4a0b62b5ed56e3dcd2394c564b503a25d22a"} Nov 28 06:38:37 crc kubenswrapper[4955]: I1128 06:38:37.602751 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 06:38:37 crc kubenswrapper[4955]: I1128 06:38:37.786822 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" event={"ID":"a7418d4f-d4f8-4e84-a59c-2f7025a856b3","Type":"ContainerStarted","Data":"1c34a246ccc9c4e2ba00f6bf4c81ffba04429c3d33460b41b640aff44468e9cc"} Nov 28 06:38:37 crc kubenswrapper[4955]: I1128 06:38:37.788734 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:37 crc kubenswrapper[4955]: I1128 06:38:37.823287 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" podStartSLOduration=2.823267819 podStartE2EDuration="2.823267819s" podCreationTimestamp="2025-11-28 06:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:37.81872074 +0000 UTC m=+1040.407976310" watchObservedRunningTime="2025-11-28 06:38:37.823267819 +0000 UTC m=+1040.412523389" Nov 28 06:38:37 crc kubenswrapper[4955]: I1128 06:38:37.898736 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 
28 06:38:37 crc kubenswrapper[4955]: I1128 06:38:37.936521 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7wwdm"] Nov 28 06:38:37 crc kubenswrapper[4955]: I1128 06:38:37.937519 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:37 crc kubenswrapper[4955]: I1128 06:38:37.967250 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7wwdm"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.030682 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-8z9v4"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.031877 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.042872 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-46f3-account-create-update-b4qrc"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.043948 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.052946 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.053219 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8z9v4"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.063053 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ebf5806-8556-44f9-8aaa-6dc42411d41a-operator-scripts\") pod \"cinder-db-create-7wwdm\" (UID: \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\") " pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.063131 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml8bf\" (UniqueName: \"kubernetes.io/projected/6ebf5806-8556-44f9-8aaa-6dc42411d41a-kube-api-access-ml8bf\") pod \"cinder-db-create-7wwdm\" (UID: \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\") " pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.069338 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46f3-account-create-update-b4qrc"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.133853 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9451-account-create-update-5f989"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.134862 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.137133 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.149998 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9451-account-create-update-5f989"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.164444 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ebf5806-8556-44f9-8aaa-6dc42411d41a-operator-scripts\") pod \"cinder-db-create-7wwdm\" (UID: \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\") " pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.164495 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml8bf\" (UniqueName: \"kubernetes.io/projected/6ebf5806-8556-44f9-8aaa-6dc42411d41a-kube-api-access-ml8bf\") pod \"cinder-db-create-7wwdm\" (UID: \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\") " pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.164550 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a74ed3-3caa-473a-8397-88c67b97775f-operator-scripts\") pod \"barbican-db-create-8z9v4\" (UID: \"e6a74ed3-3caa-473a-8397-88c67b97775f\") " pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.164612 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-operator-scripts\") pod \"cinder-46f3-account-create-update-b4qrc\" (UID: \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\") " 
pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.164642 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kql97\" (UniqueName: \"kubernetes.io/projected/e6a74ed3-3caa-473a-8397-88c67b97775f-kube-api-access-kql97\") pod \"barbican-db-create-8z9v4\" (UID: \"e6a74ed3-3caa-473a-8397-88c67b97775f\") " pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.164685 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj8bl\" (UniqueName: \"kubernetes.io/projected/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-kube-api-access-dj8bl\") pod \"cinder-46f3-account-create-update-b4qrc\" (UID: \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\") " pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.165224 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ebf5806-8556-44f9-8aaa-6dc42411d41a-operator-scripts\") pod \"cinder-db-create-7wwdm\" (UID: \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\") " pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.191196 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml8bf\" (UniqueName: \"kubernetes.io/projected/6ebf5806-8556-44f9-8aaa-6dc42411d41a-kube-api-access-ml8bf\") pod \"cinder-db-create-7wwdm\" (UID: \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\") " pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.237571 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-l9zcs"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.238989 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.241894 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.243369 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt79j" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.243564 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.250420 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.251726 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l9zcs"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.252706 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.266968 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a74ed3-3caa-473a-8397-88c67b97775f-operator-scripts\") pod \"barbican-db-create-8z9v4\" (UID: \"e6a74ed3-3caa-473a-8397-88c67b97775f\") " pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.267037 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2xqk\" (UniqueName: \"kubernetes.io/projected/ce267f10-88a5-4963-82f3-2bf40a69d1f5-kube-api-access-v2xqk\") pod \"barbican-9451-account-create-update-5f989\" (UID: \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\") " pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.267068 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-operator-scripts\") pod \"cinder-46f3-account-create-update-b4qrc\" (UID: \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\") " pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.267085 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kql97\" (UniqueName: \"kubernetes.io/projected/e6a74ed3-3caa-473a-8397-88c67b97775f-kube-api-access-kql97\") pod \"barbican-db-create-8z9v4\" (UID: \"e6a74ed3-3caa-473a-8397-88c67b97775f\") " pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.267107 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj8bl\" (UniqueName: \"kubernetes.io/projected/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-kube-api-access-dj8bl\") pod \"cinder-46f3-account-create-update-b4qrc\" (UID: \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\") " pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.267127 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce267f10-88a5-4963-82f3-2bf40a69d1f5-operator-scripts\") pod \"barbican-9451-account-create-update-5f989\" (UID: \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\") " pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.267835 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a74ed3-3caa-473a-8397-88c67b97775f-operator-scripts\") pod \"barbican-db-create-8z9v4\" (UID: \"e6a74ed3-3caa-473a-8397-88c67b97775f\") " pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:38 crc kubenswrapper[4955]: 
I1128 06:38:38.268407 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-operator-scripts\") pod \"cinder-46f3-account-create-update-b4qrc\" (UID: \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\") " pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.290073 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kql97\" (UniqueName: \"kubernetes.io/projected/e6a74ed3-3caa-473a-8397-88c67b97775f-kube-api-access-kql97\") pod \"barbican-db-create-8z9v4\" (UID: \"e6a74ed3-3caa-473a-8397-88c67b97775f\") " pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.293068 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj8bl\" (UniqueName: \"kubernetes.io/projected/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-kube-api-access-dj8bl\") pod \"cinder-46f3-account-create-update-b4qrc\" (UID: \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\") " pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.352880 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.358800 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.368894 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce267f10-88a5-4963-82f3-2bf40a69d1f5-operator-scripts\") pod \"barbican-9451-account-create-update-5f989\" (UID: \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\") " pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.369017 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wd79\" (UniqueName: \"kubernetes.io/projected/e97d232b-3a4f-4080-9943-b9e2c61b3d44-kube-api-access-9wd79\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.369124 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-combined-ca-bundle\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.369324 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-config-data\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.369406 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2xqk\" (UniqueName: \"kubernetes.io/projected/ce267f10-88a5-4963-82f3-2bf40a69d1f5-kube-api-access-v2xqk\") pod 
\"barbican-9451-account-create-update-5f989\" (UID: \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\") " pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.371078 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce267f10-88a5-4963-82f3-2bf40a69d1f5-operator-scripts\") pod \"barbican-9451-account-create-update-5f989\" (UID: \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\") " pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.396793 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2xqk\" (UniqueName: \"kubernetes.io/projected/ce267f10-88a5-4963-82f3-2bf40a69d1f5-kube-api-access-v2xqk\") pod \"barbican-9451-account-create-update-5f989\" (UID: \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\") " pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.449965 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.471179 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wd79\" (UniqueName: \"kubernetes.io/projected/e97d232b-3a4f-4080-9943-b9e2c61b3d44-kube-api-access-9wd79\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.483787 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-combined-ca-bundle\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.483908 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-config-data\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.497657 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-tsnf2"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.499071 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wd79\" (UniqueName: \"kubernetes.io/projected/e97d232b-3a4f-4080-9943-b9e2c61b3d44-kube-api-access-9wd79\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.499147 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.503057 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-config-data\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.505158 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-combined-ca-bundle\") pod \"keystone-db-sync-l9zcs\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.513726 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e755-account-create-update-k8drh"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.514845 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.517741 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.523518 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tsnf2"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.531202 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e755-account-create-update-k8drh"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.563257 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.585941 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-operator-scripts\") pod \"neutron-db-create-tsnf2\" (UID: \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\") " pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.586189 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk22g\" (UniqueName: \"kubernetes.io/projected/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-kube-api-access-zk22g\") pod \"neutron-db-create-tsnf2\" (UID: \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\") " pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.688579 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc7kj\" (UniqueName: \"kubernetes.io/projected/6eaf9231-c7bd-4a41-b9b9-2370274a779b-kube-api-access-pc7kj\") pod \"neutron-e755-account-create-update-k8drh\" (UID: \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\") " pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.688939 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eaf9231-c7bd-4a41-b9b9-2370274a779b-operator-scripts\") pod \"neutron-e755-account-create-update-k8drh\" (UID: \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\") " pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.688970 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-operator-scripts\") pod \"neutron-db-create-tsnf2\" (UID: \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\") " pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.689015 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk22g\" (UniqueName: \"kubernetes.io/projected/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-kube-api-access-zk22g\") pod \"neutron-db-create-tsnf2\" (UID: \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\") " pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.689847 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-operator-scripts\") pod \"neutron-db-create-tsnf2\" (UID: \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\") " pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.711964 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk22g\" (UniqueName: \"kubernetes.io/projected/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-kube-api-access-zk22g\") pod \"neutron-db-create-tsnf2\" (UID: \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\") " pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.790739 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc7kj\" (UniqueName: \"kubernetes.io/projected/6eaf9231-c7bd-4a41-b9b9-2370274a779b-kube-api-access-pc7kj\") pod \"neutron-e755-account-create-update-k8drh\" (UID: \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\") " pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.790842 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6eaf9231-c7bd-4a41-b9b9-2370274a779b-operator-scripts\") pod \"neutron-e755-account-create-update-k8drh\" (UID: \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\") " pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.794227 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eaf9231-c7bd-4a41-b9b9-2370274a779b-operator-scripts\") pod \"neutron-e755-account-create-update-k8drh\" (UID: \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\") " pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.812217 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7wwdm"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.816569 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc7kj\" (UniqueName: \"kubernetes.io/projected/6eaf9231-c7bd-4a41-b9b9-2370274a779b-kube-api-access-pc7kj\") pod \"neutron-e755-account-create-update-k8drh\" (UID: \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\") " pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.826617 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.840920 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.847215 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9451-account-create-update-5f989"] Nov 28 06:38:38 crc kubenswrapper[4955]: I1128 06:38:38.967437 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-8z9v4"] Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.121760 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l9zcs"] Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.136186 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-46f3-account-create-update-b4qrc"] Nov 28 06:38:39 crc kubenswrapper[4955]: W1128 06:38:39.160274 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6185b77e_1d4a_4e4c_9bca_f322a2339ee0.slice/crio-9e34ca67af7fe263e41c3b7f2a32876f95dae2dc9e8bd498cf05069222e9acde WatchSource:0}: Error finding container 9e34ca67af7fe263e41c3b7f2a32876f95dae2dc9e8bd498cf05069222e9acde: Status 404 returned error can't find the container with id 9e34ca67af7fe263e41c3b7f2a32876f95dae2dc9e8bd498cf05069222e9acde Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.370100 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e755-account-create-update-k8drh"] Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.418730 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tsnf2"] Nov 28 06:38:39 crc kubenswrapper[4955]: W1128 06:38:39.421232 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eaf9231_c7bd_4a41_b9b9_2370274a779b.slice/crio-72b5f629a3cafab2b9cc043d74e4e52fe6ecd3e8ecce0c23b225e2189eb1a64a WatchSource:0}: Error finding container 
72b5f629a3cafab2b9cc043d74e4e52fe6ecd3e8ecce0c23b225e2189eb1a64a: Status 404 returned error can't find the container with id 72b5f629a3cafab2b9cc043d74e4e52fe6ecd3e8ecce0c23b225e2189eb1a64a Nov 28 06:38:39 crc kubenswrapper[4955]: W1128 06:38:39.425409 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf65ca1af_dba4_4c0d_80ba_31b1f15957c3.slice/crio-509b190d004ba92f54b839e39ffc5d5f12a9f806ad9632e655a1f2e195b59108 WatchSource:0}: Error finding container 509b190d004ba92f54b839e39ffc5d5f12a9f806ad9632e655a1f2e195b59108: Status 404 returned error can't find the container with id 509b190d004ba92f54b839e39ffc5d5f12a9f806ad9632e655a1f2e195b59108 Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.810177 4955 generic.go:334] "Generic (PLEG): container finished" podID="e6a74ed3-3caa-473a-8397-88c67b97775f" containerID="f6d74f5b7d968955626d79434fe8f998000e145da80ab1b7ddb59eeec178b4e2" exitCode=0 Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.810347 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8z9v4" event={"ID":"e6a74ed3-3caa-473a-8397-88c67b97775f","Type":"ContainerDied","Data":"f6d74f5b7d968955626d79434fe8f998000e145da80ab1b7ddb59eeec178b4e2"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.810401 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8z9v4" event={"ID":"e6a74ed3-3caa-473a-8397-88c67b97775f","Type":"ContainerStarted","Data":"9246de4368d8ca8365870c39b405ec10de955ac2caa2cf66213ed1ddf4a6328c"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.813918 4955 generic.go:334] "Generic (PLEG): container finished" podID="6ebf5806-8556-44f9-8aaa-6dc42411d41a" containerID="1524f45a8f8cec86c4798ab5c65531d521641dc0d5e6a475af90504c31db328f" exitCode=0 Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.814021 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-7wwdm" event={"ID":"6ebf5806-8556-44f9-8aaa-6dc42411d41a","Type":"ContainerDied","Data":"1524f45a8f8cec86c4798ab5c65531d521641dc0d5e6a475af90504c31db328f"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.814056 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7wwdm" event={"ID":"6ebf5806-8556-44f9-8aaa-6dc42411d41a","Type":"ContainerStarted","Data":"94aa306edd6bee7e79eaa565cc74f59a48e36617fb775bb72ee1ff485edd803b"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.815807 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l9zcs" event={"ID":"e97d232b-3a4f-4080-9943-b9e2c61b3d44","Type":"ContainerStarted","Data":"33732a6b2dc8fb2a388e700a396e394c60fe6fd1c7418a2c249df2670b6401dd"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.817706 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e755-account-create-update-k8drh" event={"ID":"6eaf9231-c7bd-4a41-b9b9-2370274a779b","Type":"ContainerStarted","Data":"72b5f629a3cafab2b9cc043d74e4e52fe6ecd3e8ecce0c23b225e2189eb1a64a"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.818962 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46f3-account-create-update-b4qrc" event={"ID":"6185b77e-1d4a-4e4c-9bca-f322a2339ee0","Type":"ContainerStarted","Data":"9e34ca67af7fe263e41c3b7f2a32876f95dae2dc9e8bd498cf05069222e9acde"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.820756 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tsnf2" event={"ID":"f65ca1af-dba4-4c0d-80ba-31b1f15957c3","Type":"ContainerStarted","Data":"509b190d004ba92f54b839e39ffc5d5f12a9f806ad9632e655a1f2e195b59108"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.824696 4955 generic.go:334] "Generic (PLEG): container finished" podID="ce267f10-88a5-4963-82f3-2bf40a69d1f5" 
containerID="4a4c047de2c383fafd3374e492eb19bcbaf1436edf5fef0b5a7caf0df50c0d48" exitCode=0 Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.824756 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9451-account-create-update-5f989" event={"ID":"ce267f10-88a5-4963-82f3-2bf40a69d1f5","Type":"ContainerDied","Data":"4a4c047de2c383fafd3374e492eb19bcbaf1436edf5fef0b5a7caf0df50c0d48"} Nov 28 06:38:39 crc kubenswrapper[4955]: I1128 06:38:39.824805 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9451-account-create-update-5f989" event={"ID":"ce267f10-88a5-4963-82f3-2bf40a69d1f5","Type":"ContainerStarted","Data":"e542aacedf63c7180a99cb5efbdf374e48e8f19ae52e88b3b24a83077c5d437e"} Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.124115 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.262170 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml8bf\" (UniqueName: \"kubernetes.io/projected/6ebf5806-8556-44f9-8aaa-6dc42411d41a-kube-api-access-ml8bf\") pod \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\" (UID: \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\") " Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.262216 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ebf5806-8556-44f9-8aaa-6dc42411d41a-operator-scripts\") pod \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\" (UID: \"6ebf5806-8556-44f9-8aaa-6dc42411d41a\") " Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.263213 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ebf5806-8556-44f9-8aaa-6dc42411d41a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ebf5806-8556-44f9-8aaa-6dc42411d41a" (UID: 
"6ebf5806-8556-44f9-8aaa-6dc42411d41a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.268921 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebf5806-8556-44f9-8aaa-6dc42411d41a-kube-api-access-ml8bf" (OuterVolumeSpecName: "kube-api-access-ml8bf") pod "6ebf5806-8556-44f9-8aaa-6dc42411d41a" (UID: "6ebf5806-8556-44f9-8aaa-6dc42411d41a"). InnerVolumeSpecName "kube-api-access-ml8bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.275850 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.323099 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.364027 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml8bf\" (UniqueName: \"kubernetes.io/projected/6ebf5806-8556-44f9-8aaa-6dc42411d41a-kube-api-access-ml8bf\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.364061 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ebf5806-8556-44f9-8aaa-6dc42411d41a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.464713 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kql97\" (UniqueName: \"kubernetes.io/projected/e6a74ed3-3caa-473a-8397-88c67b97775f-kube-api-access-kql97\") pod \"e6a74ed3-3caa-473a-8397-88c67b97775f\" (UID: \"e6a74ed3-3caa-473a-8397-88c67b97775f\") " Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.464914 4955 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2xqk\" (UniqueName: \"kubernetes.io/projected/ce267f10-88a5-4963-82f3-2bf40a69d1f5-kube-api-access-v2xqk\") pod \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\" (UID: \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\") " Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.464947 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a74ed3-3caa-473a-8397-88c67b97775f-operator-scripts\") pod \"e6a74ed3-3caa-473a-8397-88c67b97775f\" (UID: \"e6a74ed3-3caa-473a-8397-88c67b97775f\") " Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.464987 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce267f10-88a5-4963-82f3-2bf40a69d1f5-operator-scripts\") pod \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\" (UID: \"ce267f10-88a5-4963-82f3-2bf40a69d1f5\") " Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.465449 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6a74ed3-3caa-473a-8397-88c67b97775f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6a74ed3-3caa-473a-8397-88c67b97775f" (UID: "e6a74ed3-3caa-473a-8397-88c67b97775f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.465676 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce267f10-88a5-4963-82f3-2bf40a69d1f5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce267f10-88a5-4963-82f3-2bf40a69d1f5" (UID: "ce267f10-88a5-4963-82f3-2bf40a69d1f5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.476343 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce267f10-88a5-4963-82f3-2bf40a69d1f5-kube-api-access-v2xqk" (OuterVolumeSpecName: "kube-api-access-v2xqk") pod "ce267f10-88a5-4963-82f3-2bf40a69d1f5" (UID: "ce267f10-88a5-4963-82f3-2bf40a69d1f5"). InnerVolumeSpecName "kube-api-access-v2xqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.476402 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a74ed3-3caa-473a-8397-88c67b97775f-kube-api-access-kql97" (OuterVolumeSpecName: "kube-api-access-kql97") pod "e6a74ed3-3caa-473a-8397-88c67b97775f" (UID: "e6a74ed3-3caa-473a-8397-88c67b97775f"). InnerVolumeSpecName "kube-api-access-kql97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.566485 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2xqk\" (UniqueName: \"kubernetes.io/projected/ce267f10-88a5-4963-82f3-2bf40a69d1f5-kube-api-access-v2xqk\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.566538 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6a74ed3-3caa-473a-8397-88c67b97775f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.566551 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce267f10-88a5-4963-82f3-2bf40a69d1f5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.566561 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kql97\" (UniqueName: 
\"kubernetes.io/projected/e6a74ed3-3caa-473a-8397-88c67b97775f-kube-api-access-kql97\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.841914 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-8z9v4" event={"ID":"e6a74ed3-3caa-473a-8397-88c67b97775f","Type":"ContainerDied","Data":"9246de4368d8ca8365870c39b405ec10de955ac2caa2cf66213ed1ddf4a6328c"} Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.842252 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9246de4368d8ca8365870c39b405ec10de955ac2caa2cf66213ed1ddf4a6328c" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.842103 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-8z9v4" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.843931 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7wwdm" event={"ID":"6ebf5806-8556-44f9-8aaa-6dc42411d41a","Type":"ContainerDied","Data":"94aa306edd6bee7e79eaa565cc74f59a48e36617fb775bb72ee1ff485edd803b"} Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.843981 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-7wwdm" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.843988 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94aa306edd6bee7e79eaa565cc74f59a48e36617fb775bb72ee1ff485edd803b" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.847486 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9451-account-create-update-5f989" event={"ID":"ce267f10-88a5-4963-82f3-2bf40a69d1f5","Type":"ContainerDied","Data":"e542aacedf63c7180a99cb5efbdf374e48e8f19ae52e88b3b24a83077c5d437e"} Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.847520 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e542aacedf63c7180a99cb5efbdf374e48e8f19ae52e88b3b24a83077c5d437e" Nov 28 06:38:41 crc kubenswrapper[4955]: I1128 06:38:41.847554 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9451-account-create-update-5f989" Nov 28 06:38:43 crc kubenswrapper[4955]: I1128 06:38:43.862633 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tsnf2" event={"ID":"f65ca1af-dba4-4c0d-80ba-31b1f15957c3","Type":"ContainerStarted","Data":"36080a6c2fd791b619a467a004bacf0992e25930205756bd4f544b4c2086a005"} Nov 28 06:38:44 crc kubenswrapper[4955]: I1128 06:38:44.870784 4955 generic.go:334] "Generic (PLEG): container finished" podID="6185b77e-1d4a-4e4c-9bca-f322a2339ee0" containerID="549c6ebfc94867f528040dfcd453e180e98d408d9742a005d1a5aff353aa33fd" exitCode=0 Nov 28 06:38:44 crc kubenswrapper[4955]: I1128 06:38:44.870869 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46f3-account-create-update-b4qrc" event={"ID":"6185b77e-1d4a-4e4c-9bca-f322a2339ee0","Type":"ContainerDied","Data":"549c6ebfc94867f528040dfcd453e180e98d408d9742a005d1a5aff353aa33fd"} Nov 28 06:38:44 crc kubenswrapper[4955]: I1128 06:38:44.874776 4955 
generic.go:334] "Generic (PLEG): container finished" podID="f65ca1af-dba4-4c0d-80ba-31b1f15957c3" containerID="36080a6c2fd791b619a467a004bacf0992e25930205756bd4f544b4c2086a005" exitCode=0 Nov 28 06:38:44 crc kubenswrapper[4955]: I1128 06:38:44.874853 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tsnf2" event={"ID":"f65ca1af-dba4-4c0d-80ba-31b1f15957c3","Type":"ContainerDied","Data":"36080a6c2fd791b619a467a004bacf0992e25930205756bd4f544b4c2086a005"} Nov 28 06:38:44 crc kubenswrapper[4955]: I1128 06:38:44.876747 4955 generic.go:334] "Generic (PLEG): container finished" podID="6eaf9231-c7bd-4a41-b9b9-2370274a779b" containerID="a683b62083abcb07c2be99f2b03718e6acb73ab2419a45583890e573ff345358" exitCode=0 Nov 28 06:38:44 crc kubenswrapper[4955]: I1128 06:38:44.876778 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e755-account-create-update-k8drh" event={"ID":"6eaf9231-c7bd-4a41-b9b9-2370274a779b","Type":"ContainerDied","Data":"a683b62083abcb07c2be99f2b03718e6acb73ab2419a45583890e573ff345358"} Nov 28 06:38:45 crc kubenswrapper[4955]: I1128 06:38:45.353665 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:38:45 crc kubenswrapper[4955]: I1128 06:38:45.417378 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xjtfj"] Nov 28 06:38:45 crc kubenswrapper[4955]: I1128 06:38:45.417640 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-xjtfj" podUID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" containerName="dnsmasq-dns" containerID="cri-o://42ae50065f84da9a1d85516a7f903e9828f8fb27c281d93ba4542e620505a0a9" gracePeriod=10 Nov 28 06:38:45 crc kubenswrapper[4955]: I1128 06:38:45.891193 4955 generic.go:334] "Generic (PLEG): container finished" podID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" 
containerID="42ae50065f84da9a1d85516a7f903e9828f8fb27c281d93ba4542e620505a0a9" exitCode=0 Nov 28 06:38:45 crc kubenswrapper[4955]: I1128 06:38:45.891740 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-xjtfj" event={"ID":"8c14d08e-06cc-409c-84d7-dad9fcfc4835","Type":"ContainerDied","Data":"42ae50065f84da9a1d85516a7f903e9828f8fb27c281d93ba4542e620505a0a9"} Nov 28 06:38:46 crc kubenswrapper[4955]: I1128 06:38:46.903102 4955 generic.go:334] "Generic (PLEG): container finished" podID="bf7bffc4-2591-486f-ac91-07aa7b2e8c30" containerID="1fc8347d67bcaf4e2aebeacf77f556747887a3ae28a2cb63eee041abda3093fc" exitCode=0 Nov 28 06:38:46 crc kubenswrapper[4955]: I1128 06:38:46.903225 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t5x9j" event={"ID":"bf7bffc4-2591-486f-ac91-07aa7b2e8c30","Type":"ContainerDied","Data":"1fc8347d67bcaf4e2aebeacf77f556747887a3ae28a2cb63eee041abda3093fc"} Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.508228 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.514895 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.563202 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.569839 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.572782 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592222 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk22g\" (UniqueName: \"kubernetes.io/projected/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-kube-api-access-zk22g\") pod \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\" (UID: \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592280 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-dns-svc\") pod \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592323 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrznz\" (UniqueName: \"kubernetes.io/projected/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-kube-api-access-wrznz\") pod \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592361 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-config-data\") pod \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592395 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-combined-ca-bundle\") pod \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592420 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-config\") pod \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592440 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-operator-scripts\") pod \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\" (UID: \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592466 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kf69\" (UniqueName: \"kubernetes.io/projected/8c14d08e-06cc-409c-84d7-dad9fcfc4835-kube-api-access-2kf69\") pod \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592518 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-db-sync-config-data\") pod \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\" (UID: \"bf7bffc4-2591-486f-ac91-07aa7b2e8c30\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592557 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eaf9231-c7bd-4a41-b9b9-2370274a779b-operator-scripts\") pod \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\" (UID: \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.592583 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc7kj\" (UniqueName: \"kubernetes.io/projected/6eaf9231-c7bd-4a41-b9b9-2370274a779b-kube-api-access-pc7kj\") pod \"6eaf9231-c7bd-4a41-b9b9-2370274a779b\" (UID: 
\"6eaf9231-c7bd-4a41-b9b9-2370274a779b\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.593424 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-operator-scripts\") pod \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\" (UID: \"f65ca1af-dba4-4c0d-80ba-31b1f15957c3\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.593498 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-sb\") pod \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.593568 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj8bl\" (UniqueName: \"kubernetes.io/projected/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-kube-api-access-dj8bl\") pod \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\" (UID: \"6185b77e-1d4a-4e4c-9bca-f322a2339ee0\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.593598 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-nb\") pod \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\" (UID: \"8c14d08e-06cc-409c-84d7-dad9fcfc4835\") " Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.594627 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6185b77e-1d4a-4e4c-9bca-f322a2339ee0" (UID: "6185b77e-1d4a-4e4c-9bca-f322a2339ee0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.598341 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.601448 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f65ca1af-dba4-4c0d-80ba-31b1f15957c3" (UID: "f65ca1af-dba4-4c0d-80ba-31b1f15957c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.603258 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bf7bffc4-2591-486f-ac91-07aa7b2e8c30" (UID: "bf7bffc4-2591-486f-ac91-07aa7b2e8c30"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.606231 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eaf9231-c7bd-4a41-b9b9-2370274a779b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6eaf9231-c7bd-4a41-b9b9-2370274a779b" (UID: "6eaf9231-c7bd-4a41-b9b9-2370274a779b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.606327 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eaf9231-c7bd-4a41-b9b9-2370274a779b-kube-api-access-pc7kj" (OuterVolumeSpecName: "kube-api-access-pc7kj") pod "6eaf9231-c7bd-4a41-b9b9-2370274a779b" (UID: "6eaf9231-c7bd-4a41-b9b9-2370274a779b"). InnerVolumeSpecName "kube-api-access-pc7kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.608430 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-kube-api-access-zk22g" (OuterVolumeSpecName: "kube-api-access-zk22g") pod "f65ca1af-dba4-4c0d-80ba-31b1f15957c3" (UID: "f65ca1af-dba4-4c0d-80ba-31b1f15957c3"). InnerVolumeSpecName "kube-api-access-zk22g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.608453 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c14d08e-06cc-409c-84d7-dad9fcfc4835-kube-api-access-2kf69" (OuterVolumeSpecName: "kube-api-access-2kf69") pod "8c14d08e-06cc-409c-84d7-dad9fcfc4835" (UID: "8c14d08e-06cc-409c-84d7-dad9fcfc4835"). InnerVolumeSpecName "kube-api-access-2kf69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.615061 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-kube-api-access-wrznz" (OuterVolumeSpecName: "kube-api-access-wrznz") pod "bf7bffc4-2591-486f-ac91-07aa7b2e8c30" (UID: "bf7bffc4-2591-486f-ac91-07aa7b2e8c30"). InnerVolumeSpecName "kube-api-access-wrznz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.623190 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-kube-api-access-dj8bl" (OuterVolumeSpecName: "kube-api-access-dj8bl") pod "6185b77e-1d4a-4e4c-9bca-f322a2339ee0" (UID: "6185b77e-1d4a-4e4c-9bca-f322a2339ee0"). InnerVolumeSpecName "kube-api-access-dj8bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.644811 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf7bffc4-2591-486f-ac91-07aa7b2e8c30" (UID: "bf7bffc4-2591-486f-ac91-07aa7b2e8c30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.647768 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c14d08e-06cc-409c-84d7-dad9fcfc4835" (UID: "8c14d08e-06cc-409c-84d7-dad9fcfc4835"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.666086 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c14d08e-06cc-409c-84d7-dad9fcfc4835" (UID: "8c14d08e-06cc-409c-84d7-dad9fcfc4835"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.671199 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-config" (OuterVolumeSpecName: "config") pod "8c14d08e-06cc-409c-84d7-dad9fcfc4835" (UID: "8c14d08e-06cc-409c-84d7-dad9fcfc4835"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.672370 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c14d08e-06cc-409c-84d7-dad9fcfc4835" (UID: "8c14d08e-06cc-409c-84d7-dad9fcfc4835"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.672411 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-config-data" (OuterVolumeSpecName: "config-data") pod "bf7bffc4-2591-486f-ac91-07aa7b2e8c30" (UID: "bf7bffc4-2591-486f-ac91-07aa7b2e8c30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.699017 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.699226 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.699923 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700007 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kf69\" (UniqueName: \"kubernetes.io/projected/8c14d08e-06cc-409c-84d7-dad9fcfc4835-kube-api-access-2kf69\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700069 4955 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700123 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6eaf9231-c7bd-4a41-b9b9-2370274a779b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700192 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc7kj\" (UniqueName: \"kubernetes.io/projected/6eaf9231-c7bd-4a41-b9b9-2370274a779b-kube-api-access-pc7kj\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 
06:38:48.700272 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700344 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700412 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj8bl\" (UniqueName: \"kubernetes.io/projected/6185b77e-1d4a-4e4c-9bca-f322a2339ee0-kube-api-access-dj8bl\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700478 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700584 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk22g\" (UniqueName: \"kubernetes.io/projected/f65ca1af-dba4-4c0d-80ba-31b1f15957c3-kube-api-access-zk22g\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700711 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c14d08e-06cc-409c-84d7-dad9fcfc4835-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.700787 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrznz\" (UniqueName: \"kubernetes.io/projected/bf7bffc4-2591-486f-ac91-07aa7b2e8c30-kube-api-access-wrznz\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.928256 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-698758b865-xjtfj" event={"ID":"8c14d08e-06cc-409c-84d7-dad9fcfc4835","Type":"ContainerDied","Data":"324f3bf11141822e8e8f986502a9bb10d48cd30b0038c696b3fb18d996fb3b22"} Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.928336 4955 scope.go:117] "RemoveContainer" containerID="42ae50065f84da9a1d85516a7f903e9828f8fb27c281d93ba4542e620505a0a9" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.928819 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-xjtfj" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.930911 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-46f3-account-create-update-b4qrc" event={"ID":"6185b77e-1d4a-4e4c-9bca-f322a2339ee0","Type":"ContainerDied","Data":"9e34ca67af7fe263e41c3b7f2a32876f95dae2dc9e8bd498cf05069222e9acde"} Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.931055 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e34ca67af7fe263e41c3b7f2a32876f95dae2dc9e8bd498cf05069222e9acde" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.931327 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-46f3-account-create-update-b4qrc" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.936629 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tsnf2" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.936620 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tsnf2" event={"ID":"f65ca1af-dba4-4c0d-80ba-31b1f15957c3","Type":"ContainerDied","Data":"509b190d004ba92f54b839e39ffc5d5f12a9f806ad9632e655a1f2e195b59108"} Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.936728 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509b190d004ba92f54b839e39ffc5d5f12a9f806ad9632e655a1f2e195b59108" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.949681 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l9zcs" event={"ID":"e97d232b-3a4f-4080-9943-b9e2c61b3d44","Type":"ContainerStarted","Data":"261b419b76679b0fbc2a00ddaae2de554595059d65f2af59a0e72e9ca8bab205"} Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.954553 4955 scope.go:117] "RemoveContainer" containerID="23c9d8946e10dd61c42eab2dc0d24cb635b854d9311849117480e4f89040d8d7" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.959928 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-t5x9j" event={"ID":"bf7bffc4-2591-486f-ac91-07aa7b2e8c30","Type":"ContainerDied","Data":"1f23c171df577b355c4bbb5a7dcd29ceb33f17fbc91a4c64cd0b53978c6b8183"} Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.960038 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f23c171df577b355c4bbb5a7dcd29ceb33f17fbc91a4c64cd0b53978c6b8183" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.959981 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-t5x9j" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.997578 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e755-account-create-update-k8drh" event={"ID":"6eaf9231-c7bd-4a41-b9b9-2370274a779b","Type":"ContainerDied","Data":"72b5f629a3cafab2b9cc043d74e4e52fe6ecd3e8ecce0c23b225e2189eb1a64a"} Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.997614 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b5f629a3cafab2b9cc043d74e4e52fe6ecd3e8ecce0c23b225e2189eb1a64a" Nov 28 06:38:48 crc kubenswrapper[4955]: I1128 06:38:48.997669 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e755-account-create-update-k8drh" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.006907 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-l9zcs" podStartSLOduration=1.780040208 podStartE2EDuration="11.006887842s" podCreationTimestamp="2025-11-28 06:38:38 +0000 UTC" firstStartedPulling="2025-11-28 06:38:39.155263951 +0000 UTC m=+1041.744519521" lastFinishedPulling="2025-11-28 06:38:48.382111575 +0000 UTC m=+1050.971367155" observedRunningTime="2025-11-28 06:38:48.98471139 +0000 UTC m=+1051.573966981" watchObservedRunningTime="2025-11-28 06:38:49.006887842 +0000 UTC m=+1051.596143412" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.021488 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xjtfj"] Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.028374 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-xjtfj"] Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.294464 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-wqb85"] Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 06:38:49.295034 4955 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" containerName="init" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.295148 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" containerName="init" Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 06:38:49.295237 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" containerName="dnsmasq-dns" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.295305 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" containerName="dnsmasq-dns" Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 06:38:49.295396 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce267f10-88a5-4963-82f3-2bf40a69d1f5" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.295475 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce267f10-88a5-4963-82f3-2bf40a69d1f5" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 06:38:49.295586 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebf5806-8556-44f9-8aaa-6dc42411d41a" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.296684 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebf5806-8556-44f9-8aaa-6dc42411d41a" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 06:38:49.296786 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a74ed3-3caa-473a-8397-88c67b97775f" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.296855 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a74ed3-3caa-473a-8397-88c67b97775f" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 
06:38:49.297019 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65ca1af-dba4-4c0d-80ba-31b1f15957c3" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.297094 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65ca1af-dba4-4c0d-80ba-31b1f15957c3" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 06:38:49.297179 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6185b77e-1d4a-4e4c-9bca-f322a2339ee0" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.297253 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6185b77e-1d4a-4e4c-9bca-f322a2339ee0" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 06:38:49.297329 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf7bffc4-2591-486f-ac91-07aa7b2e8c30" containerName="glance-db-sync" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.297403 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf7bffc4-2591-486f-ac91-07aa7b2e8c30" containerName="glance-db-sync" Nov 28 06:38:49 crc kubenswrapper[4955]: E1128 06:38:49.297483 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eaf9231-c7bd-4a41-b9b9-2370274a779b" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.297580 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eaf9231-c7bd-4a41-b9b9-2370274a779b" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.297882 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6185b77e-1d4a-4e4c-9bca-f322a2339ee0" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.297982 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf7bffc4-2591-486f-ac91-07aa7b2e8c30" 
containerName="glance-db-sync" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.298062 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65ca1af-dba4-4c0d-80ba-31b1f15957c3" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.298146 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eaf9231-c7bd-4a41-b9b9-2370274a779b" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.298222 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" containerName="dnsmasq-dns" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.298294 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebf5806-8556-44f9-8aaa-6dc42411d41a" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.298378 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce267f10-88a5-4963-82f3-2bf40a69d1f5" containerName="mariadb-account-create-update" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.298462 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a74ed3-3caa-473a-8397-88c67b97775f" containerName="mariadb-database-create" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.299622 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.312004 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-config\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.312038 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.312067 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.312117 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.312142 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" 
(UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.312196 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4284t\" (UniqueName: \"kubernetes.io/projected/2d8475a0-d399-462c-a6bf-bb1c70950652-kube-api-access-4284t\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.324932 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-wqb85"] Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.413153 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.413237 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.413267 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.413330 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4284t\" (UniqueName: \"kubernetes.io/projected/2d8475a0-d399-462c-a6bf-bb1c70950652-kube-api-access-4284t\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.413380 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-config\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.413406 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.414677 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.415292 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.415954 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.416681 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-svc\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.417550 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-config\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.453721 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4284t\" (UniqueName: \"kubernetes.io/projected/2d8475a0-d399-462c-a6bf-bb1c70950652-kube-api-access-4284t\") pod \"dnsmasq-dns-7ff5475cc9-wqb85\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.623040 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:49 crc kubenswrapper[4955]: I1128 06:38:49.733110 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" path="/var/lib/kubelet/pods/8c14d08e-06cc-409c-84d7-dad9fcfc4835/volumes" Nov 28 06:38:50 crc kubenswrapper[4955]: I1128 06:38:50.067045 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-wqb85"] Nov 28 06:38:50 crc kubenswrapper[4955]: W1128 06:38:50.074135 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d8475a0_d399_462c_a6bf_bb1c70950652.slice/crio-73b5d524bdcdbd347ac04e4c0d79b4071de521b2a4d1d3423e1227a7d0b1c45f WatchSource:0}: Error finding container 73b5d524bdcdbd347ac04e4c0d79b4071de521b2a4d1d3423e1227a7d0b1c45f: Status 404 returned error can't find the container with id 73b5d524bdcdbd347ac04e4c0d79b4071de521b2a4d1d3423e1227a7d0b1c45f Nov 28 06:38:51 crc kubenswrapper[4955]: I1128 06:38:51.017326 4955 generic.go:334] "Generic (PLEG): container finished" podID="2d8475a0-d399-462c-a6bf-bb1c70950652" containerID="2dad466ed59a193308d44f154d7eab5702df649cae9b33614031e776562ef9ed" exitCode=0 Nov 28 06:38:51 crc kubenswrapper[4955]: I1128 06:38:51.017414 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" event={"ID":"2d8475a0-d399-462c-a6bf-bb1c70950652","Type":"ContainerDied","Data":"2dad466ed59a193308d44f154d7eab5702df649cae9b33614031e776562ef9ed"} Nov 28 06:38:51 crc kubenswrapper[4955]: I1128 06:38:51.017752 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" event={"ID":"2d8475a0-d399-462c-a6bf-bb1c70950652","Type":"ContainerStarted","Data":"73b5d524bdcdbd347ac04e4c0d79b4071de521b2a4d1d3423e1227a7d0b1c45f"} Nov 28 06:38:52 crc kubenswrapper[4955]: I1128 06:38:52.028952 4955 generic.go:334] 
"Generic (PLEG): container finished" podID="e97d232b-3a4f-4080-9943-b9e2c61b3d44" containerID="261b419b76679b0fbc2a00ddaae2de554595059d65f2af59a0e72e9ca8bab205" exitCode=0 Nov 28 06:38:52 crc kubenswrapper[4955]: I1128 06:38:52.029078 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l9zcs" event={"ID":"e97d232b-3a4f-4080-9943-b9e2c61b3d44","Type":"ContainerDied","Data":"261b419b76679b0fbc2a00ddaae2de554595059d65f2af59a0e72e9ca8bab205"} Nov 28 06:38:52 crc kubenswrapper[4955]: I1128 06:38:52.032303 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" event={"ID":"2d8475a0-d399-462c-a6bf-bb1c70950652","Type":"ContainerStarted","Data":"a55b300a65c2ecbc5b4d07f8e26a16fa17d1583d445cd91905a6a7586edf7706"} Nov 28 06:38:52 crc kubenswrapper[4955]: I1128 06:38:52.032523 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:52 crc kubenswrapper[4955]: I1128 06:38:52.084448 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" podStartSLOduration=3.084425078 podStartE2EDuration="3.084425078s" podCreationTimestamp="2025-11-28 06:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:52.078200121 +0000 UTC m=+1054.667455751" watchObservedRunningTime="2025-11-28 06:38:52.084425078 +0000 UTC m=+1054.673680658" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.348849 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-xjtfj" podUID="8c14d08e-06cc-409c-84d7-dad9fcfc4835" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.392985 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.393125 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.423116 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.582121 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-config-data\") pod \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.582223 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wd79\" (UniqueName: \"kubernetes.io/projected/e97d232b-3a4f-4080-9943-b9e2c61b3d44-kube-api-access-9wd79\") pod \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.582283 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-combined-ca-bundle\") pod \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\" (UID: \"e97d232b-3a4f-4080-9943-b9e2c61b3d44\") " Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.589455 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/e97d232b-3a4f-4080-9943-b9e2c61b3d44-kube-api-access-9wd79" (OuterVolumeSpecName: "kube-api-access-9wd79") pod "e97d232b-3a4f-4080-9943-b9e2c61b3d44" (UID: "e97d232b-3a4f-4080-9943-b9e2c61b3d44"). InnerVolumeSpecName "kube-api-access-9wd79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.614122 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e97d232b-3a4f-4080-9943-b9e2c61b3d44" (UID: "e97d232b-3a4f-4080-9943-b9e2c61b3d44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.653008 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-config-data" (OuterVolumeSpecName: "config-data") pod "e97d232b-3a4f-4080-9943-b9e2c61b3d44" (UID: "e97d232b-3a4f-4080-9943-b9e2c61b3d44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.685130 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wd79\" (UniqueName: \"kubernetes.io/projected/e97d232b-3a4f-4080-9943-b9e2c61b3d44-kube-api-access-9wd79\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.685190 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:53 crc kubenswrapper[4955]: I1128 06:38:53.685210 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97d232b-3a4f-4080-9943-b9e2c61b3d44-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.053172 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l9zcs" event={"ID":"e97d232b-3a4f-4080-9943-b9e2c61b3d44","Type":"ContainerDied","Data":"33732a6b2dc8fb2a388e700a396e394c60fe6fd1c7418a2c249df2670b6401dd"} Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.053221 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33732a6b2dc8fb2a388e700a396e394c60fe6fd1c7418a2c249df2670b6401dd" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.053809 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-l9zcs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.239125 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-wqb85"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.243860 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" podUID="2d8475a0-d399-462c-a6bf-bb1c70950652" containerName="dnsmasq-dns" containerID="cri-o://a55b300a65c2ecbc5b4d07f8e26a16fa17d1583d445cd91905a6a7586edf7706" gracePeriod=10 Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.271869 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-4d68p"] Nov 28 06:38:54 crc kubenswrapper[4955]: E1128 06:38:54.272358 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97d232b-3a4f-4080-9943-b9e2c61b3d44" containerName="keystone-db-sync" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.272372 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97d232b-3a4f-4080-9943-b9e2c61b3d44" containerName="keystone-db-sync" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.272595 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="e97d232b-3a4f-4080-9943-b9e2c61b3d44" containerName="keystone-db-sync" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.273684 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.300932 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-4d68p"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.316787 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8xz\" (UniqueName: \"kubernetes.io/projected/20bef675-6240-4267-827a-de2aedcc539e-kube-api-access-qx8xz\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.316861 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.317046 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-config\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.317116 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.317182 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.317222 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.363046 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-sr2pm"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.364362 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.367345 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.367585 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.367826 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.367969 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt79j" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.372793 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.380892 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sr2pm"] Nov 28 
06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418108 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418157 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418183 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-config-data\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418201 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9lg4\" (UniqueName: \"kubernetes.io/projected/fcd6e5dc-ac7c-487b-b561-271cb25cf994-kube-api-access-w9lg4\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418220 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc 
kubenswrapper[4955]: I1128 06:38:54.418238 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-scripts\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418275 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-fernet-keys\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418296 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-credential-keys\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418313 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8xz\" (UniqueName: \"kubernetes.io/projected/20bef675-6240-4267-827a-de2aedcc539e-kube-api-access-qx8xz\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418333 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 
06:38:54.418390 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-combined-ca-bundle\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.418415 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-config\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.419329 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-config\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.420161 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.420498 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.425799 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.434150 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.480052 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bcf66475f-c4s6x"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.481785 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.491055 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.491317 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.492061 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-flvsh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.494057 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520019 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-combined-ca-bundle\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " 
pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520092 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffad8eb2-ac71-461f-a0fc-0203951d3e05-horizon-secret-key\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520173 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-config-data\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520195 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9lg4\" (UniqueName: \"kubernetes.io/projected/fcd6e5dc-ac7c-487b-b561-271cb25cf994-kube-api-access-w9lg4\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520222 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-scripts\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520260 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-config-data\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc 
kubenswrapper[4955]: I1128 06:38:54.520292 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffad8eb2-ac71-461f-a0fc-0203951d3e05-logs\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520322 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-fernet-keys\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520357 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-credential-keys\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520377 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-scripts\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.520433 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfff\" (UniqueName: \"kubernetes.io/projected/ffad8eb2-ac71-461f-a0fc-0203951d3e05-kube-api-access-pvfff\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.527477 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx8xz\" (UniqueName: \"kubernetes.io/projected/20bef675-6240-4267-827a-de2aedcc539e-kube-api-access-qx8xz\") pod \"dnsmasq-dns-5c5cc7c5ff-4d68p\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.530860 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-scripts\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.530917 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bcf66475f-c4s6x"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.534469 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-config-data\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.535105 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-fernet-keys\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.549620 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-credential-keys\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 
06:38:54.562365 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-combined-ca-bundle\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.591458 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9lg4\" (UniqueName: \"kubernetes.io/projected/fcd6e5dc-ac7c-487b-b561-271cb25cf994-kube-api-access-w9lg4\") pod \"keystone-bootstrap-sr2pm\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.622623 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-config-data\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.622665 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffad8eb2-ac71-461f-a0fc-0203951d3e05-logs\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.622699 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-scripts\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.622735 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvfff\" 
(UniqueName: \"kubernetes.io/projected/ffad8eb2-ac71-461f-a0fc-0203951d3e05-kube-api-access-pvfff\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.622790 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffad8eb2-ac71-461f-a0fc-0203951d3e05-horizon-secret-key\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.625041 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffad8eb2-ac71-461f-a0fc-0203951d3e05-logs\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.626156 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-scripts\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.626292 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-config-data\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.630704 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffad8eb2-ac71-461f-a0fc-0203951d3e05-horizon-secret-key\") pod \"horizon-7bcf66475f-c4s6x\" (UID: 
\"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.654567 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.656471 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.663570 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ql7bs"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.664049 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.664325 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.664916 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.669330 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.669638 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.669761 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-v4mtk" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.672082 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvfff\" (UniqueName: \"kubernetes.io/projected/ffad8eb2-ac71-461f-a0fc-0203951d3e05-kube-api-access-pvfff\") pod \"horizon-7bcf66475f-c4s6x\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 
06:38:54.682703 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.691133 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.708632 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c4dc88849-jtrxl"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.709992 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.716110 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-d4bdh"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.717174 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.719821 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.720111 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7dntf" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.720289 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724188 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724283 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc58h\" (UniqueName: \"kubernetes.io/projected/c39c6827-9dc3-482d-a268-8ba9348b925e-kube-api-access-pc58h\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724388 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-config-data\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724422 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-config-data\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724477 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-scripts\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724538 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1399b8d3-cee5-44f3-9747-701eb22526a8-logs\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724568 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-db-sync-config-data\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724589 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-combined-ca-bundle\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724662 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqw4\" (UniqueName: \"kubernetes.io/projected/1399b8d3-cee5-44f3-9747-701eb22526a8-kube-api-access-5cqw4\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724693 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-combined-ca-bundle\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724713 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-run-httpd\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724756 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-scripts\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724808 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-config-data\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724834 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd72c\" (UniqueName: \"kubernetes.io/projected/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-kube-api-access-gd72c\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724847 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c39c6827-9dc3-482d-a268-8ba9348b925e-etc-machine-id\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724864 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-scripts\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724883 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724901 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-log-httpd\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724949 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1399b8d3-cee5-44f3-9747-701eb22526a8-horizon-secret-key\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724972 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmgvn\" (UniqueName: \"kubernetes.io/projected/2d0bb158-ce32-468c-a2cc-b99759e19390-kube-api-access-bmgvn\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.724989 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-config\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.729096 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.757069 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jqvkj"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.758186 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.764661 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-psgf7" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.764859 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.776769 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.787679 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ql7bs"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.806209 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.819588 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d4bdh"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826369 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-scripts\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826427 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826460 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc58h\" (UniqueName: \"kubernetes.io/projected/c39c6827-9dc3-482d-a268-8ba9348b925e-kube-api-access-pc58h\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826492 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-config-data\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826528 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-combined-ca-bundle\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826550 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-config-data\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826570 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-config-data\") pod \"placement-db-sync-jqvkj\" (UID: 
\"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826593 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-scripts\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826613 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1399b8d3-cee5-44f3-9747-701eb22526a8-logs\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826630 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzx7s\" (UniqueName: \"kubernetes.io/projected/0a694432-dcc2-45d4-a492-f43f79169fc4-kube-api-access-xzx7s\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826647 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-db-sync-config-data\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826663 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-combined-ca-bundle\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 
crc kubenswrapper[4955]: I1128 06:38:54.826687 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqw4\" (UniqueName: \"kubernetes.io/projected/1399b8d3-cee5-44f3-9747-701eb22526a8-kube-api-access-5cqw4\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826705 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-combined-ca-bundle\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826719 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-run-httpd\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826743 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-scripts\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826770 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-config-data\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826788 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd72c\" (UniqueName: 
\"kubernetes.io/projected/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-kube-api-access-gd72c\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826802 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c39c6827-9dc3-482d-a268-8ba9348b925e-etc-machine-id\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826817 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-scripts\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826832 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-log-httpd\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826873 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a694432-dcc2-45d4-a492-f43f79169fc4-logs\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " 
pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826890 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1399b8d3-cee5-44f3-9747-701eb22526a8-horizon-secret-key\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826907 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmgvn\" (UniqueName: \"kubernetes.io/projected/2d0bb158-ce32-468c-a2cc-b99759e19390-kube-api-access-bmgvn\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.826925 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-config\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.828117 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-scripts\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.829249 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-config-data\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.830753 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1399b8d3-cee5-44f3-9747-701eb22526a8-logs\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.831844 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-4d68p"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.832297 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-run-httpd\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.834531 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-combined-ca-bundle\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.836287 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-scripts\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.836582 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-log-httpd\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.836875 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jqvkj"] Nov 28 06:38:54 crc 
kubenswrapper[4955]: I1128 06:38:54.837288 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c39c6827-9dc3-482d-a268-8ba9348b925e-etc-machine-id\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.838029 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-combined-ca-bundle\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.849673 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.852243 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-scripts\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.852514 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c4dc88849-jtrxl"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.858802 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1399b8d3-cee5-44f3-9747-701eb22526a8-horizon-secret-key\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: 
I1128 06:38:54.859009 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.859491 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-config\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.859660 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-config-data\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.860432 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-db-sync-config-data\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.865818 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmgvn\" (UniqueName: \"kubernetes.io/projected/2d0bb158-ce32-468c-a2cc-b99759e19390-kube-api-access-bmgvn\") pod \"neutron-db-sync-ql7bs\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.866437 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqw4\" (UniqueName: 
\"kubernetes.io/projected/1399b8d3-cee5-44f3-9747-701eb22526a8-kube-api-access-5cqw4\") pod \"horizon-6c4dc88849-jtrxl\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.866497 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pc58h\" (UniqueName: \"kubernetes.io/projected/c39c6827-9dc3-482d-a268-8ba9348b925e-kube-api-access-pc58h\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.868839 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-config-data\") pod \"cinder-db-sync-d4bdh\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.870085 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd72c\" (UniqueName: \"kubernetes.io/projected/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-kube-api-access-gd72c\") pod \"ceilometer-0\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " pod="openstack/ceilometer-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.872059 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f98v8"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.874999 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.889427 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f98v8"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.928420 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a694432-dcc2-45d4-a492-f43f79169fc4-logs\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.928480 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-scripts\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.928549 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-combined-ca-bundle\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.928613 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-config-data\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.928645 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzx7s\" (UniqueName: \"kubernetes.io/projected/0a694432-dcc2-45d4-a492-f43f79169fc4-kube-api-access-xzx7s\") pod 
\"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.929410 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.931467 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.932163 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a694432-dcc2-45d4-a492-f43f79169fc4-logs\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.933787 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.933974 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5v7xh" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.935127 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.936568 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.946812 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-combined-ca-bundle\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.950488 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-scripts\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.964926 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.968414 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-config-data\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:54 crc kubenswrapper[4955]: I1128 06:38:54.981409 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-h7lw7"] Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.007164 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.014284 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzx7s\" (UniqueName: \"kubernetes.io/projected/0a694432-dcc2-45d4-a492-f43f79169fc4-kube-api-access-xzx7s\") pod \"placement-db-sync-jqvkj\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.014860 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h7lw7"] Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.017269 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.017578 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.032874 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-combined-ca-bundle\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.032947 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.032981 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.033082 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lrkrv" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.033095 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-db-sync-config-data\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.033160 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.033217 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6z4f\" (UniqueName: \"kubernetes.io/projected/0ffeda94-da23-484b-b623-fe3101c66890-kube-api-access-m6z4f\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.033408 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhw8\" (UniqueName: \"kubernetes.io/projected/aa661871-e4da-48a2-820b-3c5cec9e6ce0-kube-api-access-7rhw8\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.033445 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.033489 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-config\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.039742 4955 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.053353 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.071974 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.072861 4955 generic.go:334] "Generic (PLEG): container finished" podID="2d8475a0-d399-462c-a6bf-bb1c70950652" containerID="a55b300a65c2ecbc5b4d07f8e26a16fa17d1583d445cd91905a6a7586edf7706" exitCode=0 Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.072888 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" event={"ID":"2d8475a0-d399-462c-a6bf-bb1c70950652","Type":"ContainerDied","Data":"a55b300a65c2ecbc5b4d07f8e26a16fa17d1583d445cd91905a6a7586edf7706"} Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.100002 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jqvkj" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.127767 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139169 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-db-sync-config-data\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139251 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-logs\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139284 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139357 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6z4f\" (UniqueName: \"kubernetes.io/projected/0ffeda94-da23-484b-b623-fe3101c66890-kube-api-access-m6z4f\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139404 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " 
pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139452 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhw8\" (UniqueName: \"kubernetes.io/projected/aa661871-e4da-48a2-820b-3c5cec9e6ce0-kube-api-access-7rhw8\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139474 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139498 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvtg\" (UniqueName: \"kubernetes.io/projected/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-kube-api-access-2wvtg\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139544 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-config\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139600 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " 
pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-config-data\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139637 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139659 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-combined-ca-bundle\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139693 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139710 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " 
pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139728 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.139742 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-scripts\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.140423 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.140882 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-config\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.141110 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.141352 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.141891 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.147481 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-db-sync-config-data\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.154268 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-combined-ca-bundle\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.170752 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6z4f\" (UniqueName: \"kubernetes.io/projected/0ffeda94-da23-484b-b623-fe3101c66890-kube-api-access-m6z4f\") pod \"barbican-db-sync-h7lw7\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.172153 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhw8\" 
(UniqueName: \"kubernetes.io/projected/aa661871-e4da-48a2-820b-3c5cec9e6ce0-kube-api-access-7rhw8\") pod \"dnsmasq-dns-8b5c85b87-f98v8\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") " pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.211134 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.241275 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4284t\" (UniqueName: \"kubernetes.io/projected/2d8475a0-d399-462c-a6bf-bb1c70950652-kube-api-access-4284t\") pod \"2d8475a0-d399-462c-a6bf-bb1c70950652\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.241637 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-svc\") pod \"2d8475a0-d399-462c-a6bf-bb1c70950652\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.241750 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-config\") pod \"2d8475a0-d399-462c-a6bf-bb1c70950652\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.241884 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-sb\") pod \"2d8475a0-d399-462c-a6bf-bb1c70950652\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.241954 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-swift-storage-0\") pod \"2d8475a0-d399-462c-a6bf-bb1c70950652\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242026 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-nb\") pod \"2d8475a0-d399-462c-a6bf-bb1c70950652\" (UID: \"2d8475a0-d399-462c-a6bf-bb1c70950652\") " Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242277 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wvtg\" (UniqueName: \"kubernetes.io/projected/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-kube-api-access-2wvtg\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242346 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242366 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-config-data\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242383 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: 
\"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242425 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242442 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-scripts\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242535 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-logs\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.242574 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.244062 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 
28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.244250 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.246784 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-logs\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.255037 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-scripts\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.258214 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wvtg\" (UniqueName: \"kubernetes.io/projected/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-kube-api-access-2wvtg\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.263969 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8475a0-d399-462c-a6bf-bb1c70950652-kube-api-access-4284t" (OuterVolumeSpecName: "kube-api-access-4284t") pod "2d8475a0-d399-462c-a6bf-bb1c70950652" (UID: "2d8475a0-d399-462c-a6bf-bb1c70950652"). InnerVolumeSpecName "kube-api-access-4284t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.270350 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.271606 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-config-data\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.272868 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.329631 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.330799 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.331055 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d8475a0-d399-462c-a6bf-bb1c70950652" (UID: "2d8475a0-d399-462c-a6bf-bb1c70950652"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.331647 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d8475a0-d399-462c-a6bf-bb1c70950652" (UID: "2d8475a0-d399-462c-a6bf-bb1c70950652"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.332958 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d8475a0-d399-462c-a6bf-bb1c70950652" (UID: "2d8475a0-d399-462c-a6bf-bb1c70950652"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.345587 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.345623 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.345635 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4284t\" (UniqueName: \"kubernetes.io/projected/2d8475a0-d399-462c-a6bf-bb1c70950652-kube-api-access-4284t\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.345649 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.356003 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-config" (OuterVolumeSpecName: "config") pod "2d8475a0-d399-462c-a6bf-bb1c70950652" (UID: "2d8475a0-d399-462c-a6bf-bb1c70950652"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.387472 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d8475a0-d399-462c-a6bf-bb1c70950652" (UID: "2d8475a0-d399-462c-a6bf-bb1c70950652"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.390355 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bcf66475f-c4s6x"] Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.454942 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.454972 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8475a0-d399-462c-a6bf-bb1c70950652-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.457303 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-4d68p"] Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.571574 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-sr2pm"] Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.577728 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.585591 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:38:55 crc kubenswrapper[4955]: E1128 06:38:55.586062 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8475a0-d399-462c-a6bf-bb1c70950652" containerName="init" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.586149 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8475a0-d399-462c-a6bf-bb1c70950652" containerName="init" Nov 28 06:38:55 crc kubenswrapper[4955]: E1128 06:38:55.586245 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8475a0-d399-462c-a6bf-bb1c70950652" containerName="dnsmasq-dns" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.586348 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8475a0-d399-462c-a6bf-bb1c70950652" containerName="dnsmasq-dns" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.586647 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8475a0-d399-462c-a6bf-bb1c70950652" containerName="dnsmasq-dns" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.587614 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: W1128 06:38:55.589200 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcd6e5dc_ac7c_487b_b561_271cb25cf994.slice/crio-1c63f61cac2b56018241f232d8d1ba4a579b88694c937b8a5888d6054f0a786e WatchSource:0}: Error finding container 1c63f61cac2b56018241f232d8d1ba4a579b88694c937b8a5888d6054f0a786e: Status 404 returned error can't find the container with id 1c63f61cac2b56018241f232d8d1ba4a579b88694c937b8a5888d6054f0a786e Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.589979 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.590153 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.595004 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.663847 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.663912 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 
06:38:55.663967 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.664020 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.664073 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.664096 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.664133 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k5pk\" (UniqueName: \"kubernetes.io/projected/c0c3b7e0-c3fb-4a27-be25-4a277233a507-kube-api-access-9k5pk\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc 
kubenswrapper[4955]: I1128 06:38:55.664162 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.765690 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.765855 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.765889 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k5pk\" (UniqueName: \"kubernetes.io/projected/c0c3b7e0-c3fb-4a27-be25-4a277233a507-kube-api-access-9k5pk\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.765923 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.765981 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.766005 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.766042 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.766086 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.766333 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.768370 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-logs\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.769780 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.776163 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.776253 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.776553 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.782147 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.797596 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.808041 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k5pk\" (UniqueName: \"kubernetes.io/projected/c0c3b7e0-c3fb-4a27-be25-4a277233a507-kube-api-access-9k5pk\") pod \"glance-default-internal-api-0\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.862596 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c4dc88849-jtrxl"] Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.887634 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ql7bs"] Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.953219 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:38:55 crc kubenswrapper[4955]: W1128 06:38:55.992710 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa661871_e4da_48a2_820b_3c5cec9e6ce0.slice/crio-9878bdd1f675e102456171b3f132ac7ebd3baa4c08882bb665f191575ab874ff WatchSource:0}: Error finding container 9878bdd1f675e102456171b3f132ac7ebd3baa4c08882bb665f191575ab874ff: Status 404 returned error can't find the container with id 9878bdd1f675e102456171b3f132ac7ebd3baa4c08882bb665f191575ab874ff Nov 28 06:38:55 crc kubenswrapper[4955]: I1128 06:38:55.996031 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f98v8"] Nov 28 06:38:56 crc kubenswrapper[4955]: W1128 06:38:56.000451 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc39c6827_9dc3_482d_a268_8ba9348b925e.slice/crio-58d870d4f8f5a22a3d6945f65a967454a291beab2112f3406f36301158b5d771 WatchSource:0}: Error finding container 58d870d4f8f5a22a3d6945f65a967454a291beab2112f3406f36301158b5d771: Status 404 returned error can't find the container with id 58d870d4f8f5a22a3d6945f65a967454a291beab2112f3406f36301158b5d771 Nov 28 06:38:56 crc kubenswrapper[4955]: W1128 06:38:56.006974 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae81b0d_293c_4ec2_8d42_8fb1fd0c1afb.slice/crio-f2fae82bd903a5c6b77aff568b3b1f0f10bc6ee86e783b5c1d2e9d03e752392e WatchSource:0}: Error finding container f2fae82bd903a5c6b77aff568b3b1f0f10bc6ee86e783b5c1d2e9d03e752392e: Status 404 returned error can't find the container with id f2fae82bd903a5c6b77aff568b3b1f0f10bc6ee86e783b5c1d2e9d03e752392e Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.009125 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.017667 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-d4bdh"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.093628 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sr2pm" event={"ID":"fcd6e5dc-ac7c-487b-b561-271cb25cf994","Type":"ContainerStarted","Data":"1c63f61cac2b56018241f232d8d1ba4a579b88694c937b8a5888d6054f0a786e"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.108672 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" event={"ID":"2d8475a0-d399-462c-a6bf-bb1c70950652","Type":"ContainerDied","Data":"73b5d524bdcdbd347ac04e4c0d79b4071de521b2a4d1d3423e1227a7d0b1c45f"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.108716 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff5475cc9-wqb85" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.108749 4955 scope.go:117] "RemoveContainer" containerID="a55b300a65c2ecbc5b4d07f8e26a16fa17d1583d445cd91905a6a7586edf7706" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.111746 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" event={"ID":"aa661871-e4da-48a2-820b-3c5cec9e6ce0","Type":"ContainerStarted","Data":"9878bdd1f675e102456171b3f132ac7ebd3baa4c08882bb665f191575ab874ff"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.113583 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb","Type":"ContainerStarted","Data":"f2fae82bd903a5c6b77aff568b3b1f0f10bc6ee86e783b5c1d2e9d03e752392e"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.119332 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" 
event={"ID":"20bef675-6240-4267-827a-de2aedcc539e","Type":"ContainerStarted","Data":"aecc6c1648a80c9000ff02f8ee222fd6b75a9df71a7992365853ee5de6db3795"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.120973 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ql7bs" event={"ID":"2d0bb158-ce32-468c-a2cc-b99759e19390","Type":"ContainerStarted","Data":"aa5481513fe1d6cd79b93b114fb28a5f6e4ff93eb94c02b9e80d896d02ab7a32"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.124218 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c4dc88849-jtrxl" event={"ID":"1399b8d3-cee5-44f3-9747-701eb22526a8","Type":"ContainerStarted","Data":"5f91a7f8aa0e07523531168cbb37476290c856d1675475b15d999e071b755288"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.135581 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcf66475f-c4s6x" event={"ID":"ffad8eb2-ac71-461f-a0fc-0203951d3e05","Type":"ContainerStarted","Data":"9d71b51d312b5f7882438be7ed9a0ca2c9f96153fca7a7766a5a65d9d4d1a503"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.137740 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d4bdh" event={"ID":"c39c6827-9dc3-482d-a268-8ba9348b925e","Type":"ContainerStarted","Data":"58d870d4f8f5a22a3d6945f65a967454a291beab2112f3406f36301158b5d771"} Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.140223 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-wqb85"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.159176 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff5475cc9-wqb85"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.170062 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-h7lw7"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.179669 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-db-sync-jqvkj"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.444066 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.524546 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.572425 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c4dc88849-jtrxl"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.687446 4955 scope.go:117] "RemoveContainer" containerID="2dad466ed59a193308d44f154d7eab5702df649cae9b33614031e776562ef9ed" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.687631 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-647988d457-2xzbx"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.689327 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.718591 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.763896 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.815827 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-647988d457-2xzbx"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.825843 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfc8p\" (UniqueName: \"kubernetes.io/projected/5b0eeaa3-ddeb-42e3-af1f-249881515886-kube-api-access-tfc8p\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.825880 
4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5b0eeaa3-ddeb-42e3-af1f-249881515886-horizon-secret-key\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.825931 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-config-data\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.825978 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0eeaa3-ddeb-42e3-af1f-249881515886-logs\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.826009 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-scripts\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.917599 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.930233 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-config-data\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") 
" pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.930299 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0eeaa3-ddeb-42e3-af1f-249881515886-logs\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.930330 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-scripts\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.930399 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfc8p\" (UniqueName: \"kubernetes.io/projected/5b0eeaa3-ddeb-42e3-af1f-249881515886-kube-api-access-tfc8p\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.930417 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5b0eeaa3-ddeb-42e3-af1f-249881515886-horizon-secret-key\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.933833 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0eeaa3-ddeb-42e3-af1f-249881515886-logs\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.934320 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-scripts\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.934979 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-config-data\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.942463 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5b0eeaa3-ddeb-42e3-af1f-249881515886-horizon-secret-key\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:56 crc kubenswrapper[4955]: I1128 06:38:56.991052 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfc8p\" (UniqueName: \"kubernetes.io/projected/5b0eeaa3-ddeb-42e3-af1f-249881515886-kube-api-access-tfc8p\") pod \"horizon-647988d457-2xzbx\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.027759 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.164745 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ql7bs" event={"ID":"2d0bb158-ce32-468c-a2cc-b99759e19390","Type":"ContainerStarted","Data":"3e46318759576b68895285af0db24913171a9bf8d6f917f080b5a24b0f0ed32f"} Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.169892 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0c3b7e0-c3fb-4a27-be25-4a277233a507","Type":"ContainerStarted","Data":"0598eb3780386aba67211a680710b68c163c1b8afc9a6e002109b35cd4db7883"} Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.173613 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jqvkj" event={"ID":"0a694432-dcc2-45d4-a492-f43f79169fc4","Type":"ContainerStarted","Data":"78ae2e57c99b20d15d7eab71487e5d54a6d07cd924b97d3ad819a5e56dac9917"} Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.175444 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"850cb0ec-b1af-497e-bb94-f6b4fd783ac7","Type":"ContainerStarted","Data":"6265635efdf3d39a5561353a19aa8897a9d608ec04d78a09a88799d372aedb4c"} Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.177462 4955 generic.go:334] "Generic (PLEG): container finished" podID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerID="56c4ec81304643b558d8f0c823d25005e96d7610937daba4c458f597836edaf4" exitCode=0 Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.177525 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" event={"ID":"aa661871-e4da-48a2-820b-3c5cec9e6ce0","Type":"ContainerDied","Data":"56c4ec81304643b558d8f0c823d25005e96d7610937daba4c458f597836edaf4"} Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.202530 4955 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-db-sync-h7lw7" event={"ID":"0ffeda94-da23-484b-b623-fe3101c66890","Type":"ContainerStarted","Data":"f5ac7dc7b4f4e599493322f9a8ff36837b60882118758548ff0042092d9644f2"} Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.202770 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ql7bs" podStartSLOduration=3.202749465 podStartE2EDuration="3.202749465s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:57.18642439 +0000 UTC m=+1059.775679950" watchObservedRunningTime="2025-11-28 06:38:57.202749465 +0000 UTC m=+1059.792005035" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.228401 4955 generic.go:334] "Generic (PLEG): container finished" podID="20bef675-6240-4267-827a-de2aedcc539e" containerID="86d50871016bb1bb5e7739561d3a0ba040d4a5a33a650b350c447ec2283ef3af" exitCode=0 Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.228483 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" event={"ID":"20bef675-6240-4267-827a-de2aedcc539e","Type":"ContainerDied","Data":"86d50871016bb1bb5e7739561d3a0ba040d4a5a33a650b350c447ec2283ef3af"} Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.236662 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sr2pm" event={"ID":"fcd6e5dc-ac7c-487b-b561-271cb25cf994","Type":"ContainerStarted","Data":"48c19e98866543e52c02336738fb6daeff44c92a9a75746616a3858dbadb54d5"} Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.301872 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-sr2pm" podStartSLOduration=3.301853816 podStartE2EDuration="3.301853816s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:57.294222799 +0000 UTC m=+1059.883478369" watchObservedRunningTime="2025-11-28 06:38:57.301853816 +0000 UTC m=+1059.891109386" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.321409 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-647988d457-2xzbx"] Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.617017 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.732578 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8475a0-d399-462c-a6bf-bb1c70950652" path="/var/lib/kubelet/pods/2d8475a0-d399-462c-a6bf-bb1c70950652/volumes" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.752230 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-config\") pod \"20bef675-6240-4267-827a-de2aedcc539e\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.752361 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-swift-storage-0\") pod \"20bef675-6240-4267-827a-de2aedcc539e\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.752429 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx8xz\" (UniqueName: \"kubernetes.io/projected/20bef675-6240-4267-827a-de2aedcc539e-kube-api-access-qx8xz\") pod \"20bef675-6240-4267-827a-de2aedcc539e\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.752466 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-sb\") pod \"20bef675-6240-4267-827a-de2aedcc539e\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.752695 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-nb\") pod \"20bef675-6240-4267-827a-de2aedcc539e\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.752771 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-svc\") pod \"20bef675-6240-4267-827a-de2aedcc539e\" (UID: \"20bef675-6240-4267-827a-de2aedcc539e\") " Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.776701 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20bef675-6240-4267-827a-de2aedcc539e-kube-api-access-qx8xz" (OuterVolumeSpecName: "kube-api-access-qx8xz") pod "20bef675-6240-4267-827a-de2aedcc539e" (UID: "20bef675-6240-4267-827a-de2aedcc539e"). InnerVolumeSpecName "kube-api-access-qx8xz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.784531 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "20bef675-6240-4267-827a-de2aedcc539e" (UID: "20bef675-6240-4267-827a-de2aedcc539e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.798658 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "20bef675-6240-4267-827a-de2aedcc539e" (UID: "20bef675-6240-4267-827a-de2aedcc539e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.805210 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "20bef675-6240-4267-827a-de2aedcc539e" (UID: "20bef675-6240-4267-827a-de2aedcc539e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.806440 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "20bef675-6240-4267-827a-de2aedcc539e" (UID: "20bef675-6240-4267-827a-de2aedcc539e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.857898 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.857946 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.857955 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.857965 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx8xz\" (UniqueName: \"kubernetes.io/projected/20bef675-6240-4267-827a-de2aedcc539e-kube-api-access-qx8xz\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.857976 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.860333 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-config" (OuterVolumeSpecName: "config") pod "20bef675-6240-4267-827a-de2aedcc539e" (UID: "20bef675-6240-4267-827a-de2aedcc539e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:38:57 crc kubenswrapper[4955]: I1128 06:38:57.960563 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20bef675-6240-4267-827a-de2aedcc539e-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.266633 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0c3b7e0-c3fb-4a27-be25-4a277233a507","Type":"ContainerStarted","Data":"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906"} Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.270457 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"850cb0ec-b1af-497e-bb94-f6b4fd783ac7","Type":"ContainerStarted","Data":"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1"} Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.274778 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" event={"ID":"aa661871-e4da-48a2-820b-3c5cec9e6ce0","Type":"ContainerStarted","Data":"1d09310909f234e9400748e098724a349dcbf157ae5bad13c35aa2eca6a0010a"} Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.276482 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.279579 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647988d457-2xzbx" event={"ID":"5b0eeaa3-ddeb-42e3-af1f-249881515886","Type":"ContainerStarted","Data":"1fa81b60ade65b74256ecafe88a4e56e5fe17ebf4ed2ea22c0d3e85bf7f35551"} Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.286240 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" 
event={"ID":"20bef675-6240-4267-827a-de2aedcc539e","Type":"ContainerDied","Data":"aecc6c1648a80c9000ff02f8ee222fd6b75a9df71a7992365853ee5de6db3795"} Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.286309 4955 scope.go:117] "RemoveContainer" containerID="86d50871016bb1bb5e7739561d3a0ba040d4a5a33a650b350c447ec2283ef3af" Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.286576 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-4d68p" Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.319891 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" podStartSLOduration=4.319861978 podStartE2EDuration="4.319861978s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:58.300238149 +0000 UTC m=+1060.889493729" watchObservedRunningTime="2025-11-28 06:38:58.319861978 +0000 UTC m=+1060.909117548" Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.384717 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-4d68p"] Nov 28 06:38:58 crc kubenswrapper[4955]: I1128 06:38:58.395750 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-4d68p"] Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.302636 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0c3b7e0-c3fb-4a27-be25-4a277233a507","Type":"ContainerStarted","Data":"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d"} Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.302733 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerName="glance-log" 
containerID="cri-o://fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906" gracePeriod=30 Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.302778 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerName="glance-httpd" containerID="cri-o://2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d" gracePeriod=30 Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.310868 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerName="glance-log" containerID="cri-o://3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1" gracePeriod=30 Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.311162 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"850cb0ec-b1af-497e-bb94-f6b4fd783ac7","Type":"ContainerStarted","Data":"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7"} Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.311222 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerName="glance-httpd" containerID="cri-o://5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7" gracePeriod=30 Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.335277 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.335252256 podStartE2EDuration="5.335252256s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:59.320785104 +0000 UTC m=+1061.910040684" 
watchObservedRunningTime="2025-11-28 06:38:59.335252256 +0000 UTC m=+1061.924507826" Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.346980 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.346957579 podStartE2EDuration="5.346957579s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:38:59.343945183 +0000 UTC m=+1061.933200753" watchObservedRunningTime="2025-11-28 06:38:59.346957579 +0000 UTC m=+1061.936213149" Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.727400 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20bef675-6240-4267-827a-de2aedcc539e" path="/var/lib/kubelet/pods/20bef675-6240-4267-827a-de2aedcc539e/volumes" Nov 28 06:38:59 crc kubenswrapper[4955]: I1128 06:38:59.993457 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.062407 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.119704 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k5pk\" (UniqueName: \"kubernetes.io/projected/c0c3b7e0-c3fb-4a27-be25-4a277233a507-kube-api-access-9k5pk\") pod \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.119771 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-scripts\") pod \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.119810 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-config-data\") pod \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.119847 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-combined-ca-bundle\") pod \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.119866 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.119942 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-logs\") pod \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.120008 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-httpd-run\") pod \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.120044 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-internal-tls-certs\") pod \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\" (UID: \"c0c3b7e0-c3fb-4a27-be25-4a277233a507\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.120939 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c0c3b7e0-c3fb-4a27-be25-4a277233a507" (UID: "c0c3b7e0-c3fb-4a27-be25-4a277233a507"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.120980 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-logs" (OuterVolumeSpecName: "logs") pod "c0c3b7e0-c3fb-4a27-be25-4a277233a507" (UID: "c0c3b7e0-c3fb-4a27-be25-4a277233a507"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.125711 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "c0c3b7e0-c3fb-4a27-be25-4a277233a507" (UID: "c0c3b7e0-c3fb-4a27-be25-4a277233a507"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.125715 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-scripts" (OuterVolumeSpecName: "scripts") pod "c0c3b7e0-c3fb-4a27-be25-4a277233a507" (UID: "c0c3b7e0-c3fb-4a27-be25-4a277233a507"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.129654 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c3b7e0-c3fb-4a27-be25-4a277233a507-kube-api-access-9k5pk" (OuterVolumeSpecName: "kube-api-access-9k5pk") pod "c0c3b7e0-c3fb-4a27-be25-4a277233a507" (UID: "c0c3b7e0-c3fb-4a27-be25-4a277233a507"). InnerVolumeSpecName "kube-api-access-9k5pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.149853 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0c3b7e0-c3fb-4a27-be25-4a277233a507" (UID: "c0c3b7e0-c3fb-4a27-be25-4a277233a507"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.177772 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c0c3b7e0-c3fb-4a27-be25-4a277233a507" (UID: "c0c3b7e0-c3fb-4a27-be25-4a277233a507"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.180836 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-config-data" (OuterVolumeSpecName: "config-data") pod "c0c3b7e0-c3fb-4a27-be25-4a277233a507" (UID: "c0c3b7e0-c3fb-4a27-be25-4a277233a507"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.221647 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.221721 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-scripts\") pod \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.221769 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-public-tls-certs\") pod \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.221795 
4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-combined-ca-bundle\") pod \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.221843 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-logs\") pod \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.221938 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-config-data\") pod \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.221973 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wvtg\" (UniqueName: \"kubernetes.io/projected/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-kube-api-access-2wvtg\") pod \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.222093 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-httpd-run\") pod \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\" (UID: \"850cb0ec-b1af-497e-bb94-f6b4fd783ac7\") " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.222488 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc 
kubenswrapper[4955]: I1128 06:39:00.222538 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.222553 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.222565 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c0c3b7e0-c3fb-4a27-be25-4a277233a507-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.222576 4955 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.222588 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k5pk\" (UniqueName: \"kubernetes.io/projected/c0c3b7e0-c3fb-4a27-be25-4a277233a507-kube-api-access-9k5pk\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.222600 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.222612 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c0c3b7e0-c3fb-4a27-be25-4a277233a507-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.236770 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-scripts" (OuterVolumeSpecName: "scripts") pod "850cb0ec-b1af-497e-bb94-f6b4fd783ac7" (UID: "850cb0ec-b1af-497e-bb94-f6b4fd783ac7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.237573 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "850cb0ec-b1af-497e-bb94-f6b4fd783ac7" (UID: "850cb0ec-b1af-497e-bb94-f6b4fd783ac7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.237637 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-logs" (OuterVolumeSpecName: "logs") pod "850cb0ec-b1af-497e-bb94-f6b4fd783ac7" (UID: "850cb0ec-b1af-497e-bb94-f6b4fd783ac7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.241427 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "850cb0ec-b1af-497e-bb94-f6b4fd783ac7" (UID: "850cb0ec-b1af-497e-bb94-f6b4fd783ac7"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.241423 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-kube-api-access-2wvtg" (OuterVolumeSpecName: "kube-api-access-2wvtg") pod "850cb0ec-b1af-497e-bb94-f6b4fd783ac7" (UID: "850cb0ec-b1af-497e-bb94-f6b4fd783ac7"). InnerVolumeSpecName "kube-api-access-2wvtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.249357 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.262696 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "850cb0ec-b1af-497e-bb94-f6b4fd783ac7" (UID: "850cb0ec-b1af-497e-bb94-f6b4fd783ac7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.287540 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "850cb0ec-b1af-497e-bb94-f6b4fd783ac7" (UID: "850cb0ec-b1af-497e-bb94-f6b4fd783ac7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.302999 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-config-data" (OuterVolumeSpecName: "config-data") pod "850cb0ec-b1af-497e-bb94-f6b4fd783ac7" (UID: "850cb0ec-b1af-497e-bb94-f6b4fd783ac7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324254 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324311 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wvtg\" (UniqueName: \"kubernetes.io/projected/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-kube-api-access-2wvtg\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324326 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324373 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324385 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324394 4955 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324405 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324416 4955 reconciler_common.go:293] "Volume 
detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850cb0ec-b1af-497e-bb94-f6b4fd783ac7-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.324431 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.344977 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.348262 4955 generic.go:334] "Generic (PLEG): container finished" podID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerID="2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d" exitCode=143 Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.348322 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0c3b7e0-c3fb-4a27-be25-4a277233a507","Type":"ContainerDied","Data":"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d"} Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.348345 4955 generic.go:334] "Generic (PLEG): container finished" podID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerID="fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906" exitCode=143 Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.348366 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c0c3b7e0-c3fb-4a27-be25-4a277233a507","Type":"ContainerDied","Data":"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906"} Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.348380 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c0c3b7e0-c3fb-4a27-be25-4a277233a507","Type":"ContainerDied","Data":"0598eb3780386aba67211a680710b68c163c1b8afc9a6e002109b35cd4db7883"} Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.348298 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.348400 4955 scope.go:117] "RemoveContainer" containerID="2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.360880 4955 generic.go:334] "Generic (PLEG): container finished" podID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerID="5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7" exitCode=143 Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.360932 4955 generic.go:334] "Generic (PLEG): container finished" podID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerID="3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1" exitCode=143 Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.360929 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.360984 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"850cb0ec-b1af-497e-bb94-f6b4fd783ac7","Type":"ContainerDied","Data":"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7"} Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.361048 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"850cb0ec-b1af-497e-bb94-f6b4fd783ac7","Type":"ContainerDied","Data":"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1"} Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.361070 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"850cb0ec-b1af-497e-bb94-f6b4fd783ac7","Type":"ContainerDied","Data":"6265635efdf3d39a5561353a19aa8897a9d608ec04d78a09a88799d372aedb4c"} Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.390648 4955 scope.go:117] "RemoveContainer" containerID="fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.407771 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.431973 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.434434 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.453142 4955 scope.go:117] "RemoveContainer" containerID="2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d" Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 
06:39:00.453975 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d\": container with ID starting with 2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d not found: ID does not exist" containerID="2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.454026 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d"} err="failed to get container status \"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d\": rpc error: code = NotFound desc = could not find container \"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d\": container with ID starting with 2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d not found: ID does not exist" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.454055 4955 scope.go:117] "RemoveContainer" containerID="fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906" Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 06:39:00.454372 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906\": container with ID starting with fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906 not found: ID does not exist" containerID="fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.454433 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906"} err="failed to get container status \"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906\": rpc 
error: code = NotFound desc = could not find container \"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906\": container with ID starting with fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906 not found: ID does not exist" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.454465 4955 scope.go:117] "RemoveContainer" containerID="2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.455709 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d"} err="failed to get container status \"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d\": rpc error: code = NotFound desc = could not find container \"2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d\": container with ID starting with 2f8b74bd437ac23faece0ad866b05044e868dbc40df63519a581eb7d4d59fb1d not found: ID does not exist" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.455758 4955 scope.go:117] "RemoveContainer" containerID="fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.455836 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.459724 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906"} err="failed to get container status \"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906\": rpc error: code = NotFound desc = could not find container \"fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906\": container with ID starting with fe48acb51d244bf59986c81e52723731f1e8c95e97aef5caf450b8063ad08906 not found: ID does not exist" Nov 28 06:39:00 crc 
kubenswrapper[4955]: I1128 06:39:00.459766 4955 scope.go:117] "RemoveContainer" containerID="5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.502303 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.511733 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 06:39:00.512318 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerName="glance-httpd" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512343 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerName="glance-httpd" Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 06:39:00.512359 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerName="glance-log" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512367 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerName="glance-log" Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 06:39:00.512389 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerName="glance-log" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512398 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerName="glance-log" Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 06:39:00.512428 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerName="glance-httpd" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512436 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerName="glance-httpd" Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 06:39:00.512460 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20bef675-6240-4267-827a-de2aedcc539e" containerName="init" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512469 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="20bef675-6240-4267-827a-de2aedcc539e" containerName="init" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512784 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="20bef675-6240-4267-827a-de2aedcc539e" containerName="init" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512807 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerName="glance-log" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512827 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerName="glance-httpd" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512842 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" containerName="glance-log" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.512859 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" containerName="glance-httpd" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.514048 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.517654 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.517826 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5v7xh" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.518361 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.518671 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.518696 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.518774 4955 scope.go:117] "RemoveContainer" containerID="3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.520397 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.523806 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.523928 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.530335 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.542374 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.565585 4955 scope.go:117] "RemoveContainer" containerID="5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7" Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 06:39:00.572601 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7\": container with ID starting with 5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7 not found: ID does not exist" containerID="5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.572649 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7"} err="failed to get container status \"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7\": rpc error: code = NotFound desc = could not find container \"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7\": container with ID starting with 5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7 not found: ID does not exist" Nov 28 
06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.572693 4955 scope.go:117] "RemoveContainer" containerID="3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1" Nov 28 06:39:00 crc kubenswrapper[4955]: E1128 06:39:00.573395 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1\": container with ID starting with 3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1 not found: ID does not exist" containerID="3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.573449 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1"} err="failed to get container status \"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1\": rpc error: code = NotFound desc = could not find container \"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1\": container with ID starting with 3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1 not found: ID does not exist" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.573482 4955 scope.go:117] "RemoveContainer" containerID="5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.573905 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7"} err="failed to get container status \"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7\": rpc error: code = NotFound desc = could not find container \"5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7\": container with ID starting with 5db559ec352940eaf5355cedabeca7064b565723917a5f8cc936ff33f94782e7 not found: ID does not 
exist" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.573934 4955 scope.go:117] "RemoveContainer" containerID="3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.574266 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1"} err="failed to get container status \"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1\": rpc error: code = NotFound desc = could not find container \"3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1\": container with ID starting with 3357332e8c537c43a06d0ef3ce9a88c51fdb87832fddf5fad8889eebaaf0b5a1 not found: ID does not exist" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635237 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635291 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-logs\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635325 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 
06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635357 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635384 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635568 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-scripts\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635668 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635717 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " 
pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635795 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635815 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fckbh\" (UniqueName: \"kubernetes.io/projected/541ddc3e-f13a-4d19-9d07-5b1897c10957-kube-api-access-fckbh\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635863 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-config-data\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.635938 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-logs\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") 
" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.636012 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.636026 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.636045 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfrxn\" (UniqueName: \"kubernetes.io/projected/edf21a9b-612c-4fa2-a439-5f05b11606bc-kube-api-access-pfrxn\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741381 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741431 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741458 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfrxn\" (UniqueName: \"kubernetes.io/projected/edf21a9b-612c-4fa2-a439-5f05b11606bc-kube-api-access-pfrxn\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741553 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741575 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-logs\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741631 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741663 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 
06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741718 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741747 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-scripts\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741809 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741847 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.741924 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.743576 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.743630 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fckbh\" (UniqueName: \"kubernetes.io/projected/541ddc3e-f13a-4d19-9d07-5b1897c10957-kube-api-access-fckbh\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.743660 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-config-data\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.743729 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-logs\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.744658 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.744793 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.745155 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-logs\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.747193 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.748033 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.751834 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.752738 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-config-data\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.754316 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.755703 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.757141 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-scripts\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.758300 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-logs\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.759487 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.763233 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.763593 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.764274 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfrxn\" (UniqueName: \"kubernetes.io/projected/edf21a9b-612c-4fa2-a439-5f05b11606bc-kube-api-access-pfrxn\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.774409 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fckbh\" (UniqueName: \"kubernetes.io/projected/541ddc3e-f13a-4d19-9d07-5b1897c10957-kube-api-access-fckbh\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.783830 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " pod="openstack/glance-default-internal-api-0" Nov 28 
06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.791465 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.839033 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:00 crc kubenswrapper[4955]: I1128 06:39:00.848231 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:39:01 crc kubenswrapper[4955]: I1128 06:39:01.715572 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850cb0ec-b1af-497e-bb94-f6b4fd783ac7" path="/var/lib/kubelet/pods/850cb0ec-b1af-497e-bb94-f6b4fd783ac7/volumes" Nov 28 06:39:01 crc kubenswrapper[4955]: I1128 06:39:01.716351 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c3b7e0-c3fb-4a27-be25-4a277233a507" path="/var/lib/kubelet/pods/c0c3b7e0-c3fb-4a27-be25-4a277233a507/volumes" Nov 28 06:39:02 crc kubenswrapper[4955]: I1128 06:39:02.389292 4955 generic.go:334] "Generic (PLEG): container finished" podID="fcd6e5dc-ac7c-487b-b561-271cb25cf994" containerID="48c19e98866543e52c02336738fb6daeff44c92a9a75746616a3858dbadb54d5" exitCode=0 Nov 28 06:39:02 crc kubenswrapper[4955]: I1128 06:39:02.389386 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sr2pm" event={"ID":"fcd6e5dc-ac7c-487b-b561-271cb25cf994","Type":"ContainerDied","Data":"48c19e98866543e52c02336738fb6daeff44c92a9a75746616a3858dbadb54d5"} Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.514287 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bcf66475f-c4s6x"] Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 
06:39:03.579874 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-9c465b4d8-cslvv"] Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.584611 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.601683 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.602574 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9c465b4d8-cslvv"] Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.622785 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-config-data\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.622833 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a8eb88-043f-44ca-8b8c-68288a2045d9-logs\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.622883 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-combined-ca-bundle\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.622901 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-tls-certs\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.622923 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-454p8\" (UniqueName: \"kubernetes.io/projected/f3a8eb88-043f-44ca-8b8c-68288a2045d9-kube-api-access-454p8\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.622957 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-scripts\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.622977 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-secret-key\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.733058 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-454p8\" (UniqueName: \"kubernetes.io/projected/f3a8eb88-043f-44ca-8b8c-68288a2045d9-kube-api-access-454p8\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.735138 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-scripts\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.738836 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-secret-key\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.739033 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-config-data\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.739136 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a8eb88-043f-44ca-8b8c-68288a2045d9-logs\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.740861 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-scripts\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.742194 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-config-data\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " 
pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.742349 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a8eb88-043f-44ca-8b8c-68288a2045d9-logs\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.750091 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.758409 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-secret-key\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.767752 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-combined-ca-bundle\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.767817 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-tls-certs\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.779732 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-combined-ca-bundle\") pod \"horizon-9c465b4d8-cslvv\" (UID: 
\"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.780226 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-454p8\" (UniqueName: \"kubernetes.io/projected/f3a8eb88-043f-44ca-8b8c-68288a2045d9-kube-api-access-454p8\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.783067 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-tls-certs\") pod \"horizon-9c465b4d8-cslvv\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:03 crc kubenswrapper[4955]: I1128 06:39:03.844796 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-647988d457-2xzbx"] Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.859487 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56f45c5b6-nqg9b"] Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.861004 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.867391 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.875259 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56f45c5b6-nqg9b"] Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.943677 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.970690 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-scripts\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.970767 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-horizon-tls-certs\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.970949 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-horizon-secret-key\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.971010 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-config-data\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.971190 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-combined-ca-bundle\") pod \"horizon-56f45c5b6-nqg9b\" (UID: 
\"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.971265 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-logs\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:03.971368 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrzpp\" (UniqueName: \"kubernetes.io/projected/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-kube-api-access-nrzpp\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.074830 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzpp\" (UniqueName: \"kubernetes.io/projected/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-kube-api-access-nrzpp\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.075160 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-scripts\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.075189 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-horizon-tls-certs\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " 
pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.075216 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-horizon-secret-key\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.075233 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-config-data\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.075294 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-combined-ca-bundle\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.075325 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-logs\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.075994 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-logs\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.076696 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-scripts\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.079627 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-config-data\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.081236 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-horizon-secret-key\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.083750 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-horizon-tls-certs\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.085163 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-combined-ca-bundle\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.100987 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzpp\" (UniqueName: 
\"kubernetes.io/projected/0540bb1f-c904-4b07-acda-ce47d0bdfa7c-kube-api-access-nrzpp\") pod \"horizon-56f45c5b6-nqg9b\" (UID: \"0540bb1f-c904-4b07-acda-ce47d0bdfa7c\") " pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:04 crc kubenswrapper[4955]: I1128 06:39:04.206952 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:05 crc kubenswrapper[4955]: I1128 06:39:05.213948 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" Nov 28 06:39:05 crc kubenswrapper[4955]: I1128 06:39:05.270168 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dwjhz"] Nov 28 06:39:05 crc kubenswrapper[4955]: I1128 06:39:05.270401 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="dnsmasq-dns" containerID="cri-o://1c34a246ccc9c4e2ba00f6bf4c81ffba04429c3d33460b41b640aff44468e9cc" gracePeriod=10 Nov 28 06:39:05 crc kubenswrapper[4955]: I1128 06:39:05.355597 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Nov 28 06:39:06 crc kubenswrapper[4955]: I1128 06:39:06.439421 4955 generic.go:334] "Generic (PLEG): container finished" podID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerID="1c34a246ccc9c4e2ba00f6bf4c81ffba04429c3d33460b41b640aff44468e9cc" exitCode=0 Nov 28 06:39:06 crc kubenswrapper[4955]: I1128 06:39:06.439473 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" event={"ID":"a7418d4f-d4f8-4e84-a59c-2f7025a856b3","Type":"ContainerDied","Data":"1c34a246ccc9c4e2ba00f6bf4c81ffba04429c3d33460b41b640aff44468e9cc"} Nov 28 
06:39:10 crc kubenswrapper[4955]: I1128 06:39:10.352879 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Nov 28 06:39:13 crc kubenswrapper[4955]: E1128 06:39:13.920305 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 28 06:39:13 crc kubenswrapper[4955]: E1128 06:39:13.920801 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n646h645h5bfh5c7h588h5c4h5h576h649hb9h64h658h659h5d6h67bh586h6h588h5c4h65dhb4h7h5d9hb8h55bh6h7bh5cch5dh594h7hcfq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount
{Name:kube-api-access-gd72c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:39:13 crc kubenswrapper[4955]: E1128 06:39:13.939923 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 28 06:39:13 crc kubenswrapper[4955]: E1128 06:39:13.940094 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n575h588h5d5h6fh59dh55ch668hd4h5bdhbdhbdh689h9bhf6h668h546h648h7bh57h65ch9chbch548h66fh5d7hdbh589h7ch5b4hf6h58fh98q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfc8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-647988d457-2xzbx_openstack(5b0eeaa3-ddeb-42e3-af1f-249881515886): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:39:13 crc kubenswrapper[4955]: E1128 
06:39:13.944703 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-647988d457-2xzbx" podUID="5b0eeaa3-ddeb-42e3-af1f-249881515886" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.050039 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.213655 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-config-data\") pod \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.213777 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-combined-ca-bundle\") pod \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.213855 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9lg4\" (UniqueName: \"kubernetes.io/projected/fcd6e5dc-ac7c-487b-b561-271cb25cf994-kube-api-access-w9lg4\") pod \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.213899 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-scripts\") pod 
\"fcd6e5dc-ac7c-487b-b561-271cb25cf994\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.213935 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-credential-keys\") pod \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.213968 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-fernet-keys\") pod \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\" (UID: \"fcd6e5dc-ac7c-487b-b561-271cb25cf994\") " Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.230242 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fcd6e5dc-ac7c-487b-b561-271cb25cf994" (UID: "fcd6e5dc-ac7c-487b-b561-271cb25cf994"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.242031 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-scripts" (OuterVolumeSpecName: "scripts") pod "fcd6e5dc-ac7c-487b-b561-271cb25cf994" (UID: "fcd6e5dc-ac7c-487b-b561-271cb25cf994"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.243567 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fcd6e5dc-ac7c-487b-b561-271cb25cf994" (UID: "fcd6e5dc-ac7c-487b-b561-271cb25cf994"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.249335 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd6e5dc-ac7c-487b-b561-271cb25cf994-kube-api-access-w9lg4" (OuterVolumeSpecName: "kube-api-access-w9lg4") pod "fcd6e5dc-ac7c-487b-b561-271cb25cf994" (UID: "fcd6e5dc-ac7c-487b-b561-271cb25cf994"). InnerVolumeSpecName "kube-api-access-w9lg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.262993 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcd6e5dc-ac7c-487b-b561-271cb25cf994" (UID: "fcd6e5dc-ac7c-487b-b561-271cb25cf994"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.282944 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-config-data" (OuterVolumeSpecName: "config-data") pod "fcd6e5dc-ac7c-487b-b561-271cb25cf994" (UID: "fcd6e5dc-ac7c-487b-b561-271cb25cf994"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.315561 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.315591 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.315606 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9lg4\" (UniqueName: \"kubernetes.io/projected/fcd6e5dc-ac7c-487b-b561-271cb25cf994-kube-api-access-w9lg4\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.315617 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.315628 4955 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.315639 4955 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fcd6e5dc-ac7c-487b-b561-271cb25cf994-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.399427 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.518397 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-sr2pm" Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.518532 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-sr2pm" event={"ID":"fcd6e5dc-ac7c-487b-b561-271cb25cf994","Type":"ContainerDied","Data":"1c63f61cac2b56018241f232d8d1ba4a579b88694c937b8a5888d6054f0a786e"} Nov 28 06:39:14 crc kubenswrapper[4955]: I1128 06:39:14.518569 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c63f61cac2b56018241f232d8d1ba4a579b88694c937b8a5888d6054f0a786e" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.136970 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-sr2pm"] Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.144327 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-sr2pm"] Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.230185 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-44sl9"] Nov 28 06:39:15 crc kubenswrapper[4955]: E1128 06:39:15.230684 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd6e5dc-ac7c-487b-b561-271cb25cf994" containerName="keystone-bootstrap" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.230708 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd6e5dc-ac7c-487b-b561-271cb25cf994" containerName="keystone-bootstrap" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.230951 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd6e5dc-ac7c-487b-b561-271cb25cf994" containerName="keystone-bootstrap" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.231623 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.233532 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.233661 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.234133 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.234319 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.234571 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt79j" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.253019 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-44sl9"] Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.339247 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzcqj\" (UniqueName: \"kubernetes.io/projected/bddb6574-5273-410a-93aa-5293a16dfeba-kube-api-access-wzcqj\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.339291 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-config-data\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.339317 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-combined-ca-bundle\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.339539 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-credential-keys\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.339677 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-fernet-keys\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.339784 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-scripts\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.441257 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-credential-keys\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.441315 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-fernet-keys\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.441351 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-scripts\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.441455 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzcqj\" (UniqueName: \"kubernetes.io/projected/bddb6574-5273-410a-93aa-5293a16dfeba-kube-api-access-wzcqj\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.441479 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-config-data\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.441526 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-combined-ca-bundle\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.445599 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-scripts\") pod \"keystone-bootstrap-44sl9\" (UID: 
\"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.446019 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-combined-ca-bundle\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.446475 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-fernet-keys\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.446722 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-credential-keys\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.447424 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-config-data\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.460176 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzcqj\" (UniqueName: \"kubernetes.io/projected/bddb6574-5273-410a-93aa-5293a16dfeba-kube-api-access-wzcqj\") pod \"keystone-bootstrap-44sl9\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") " pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 
06:39:15.556259 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:15 crc kubenswrapper[4955]: I1128 06:39:15.713078 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd6e5dc-ac7c-487b-b561-271cb25cf994" path="/var/lib/kubelet/pods/fcd6e5dc-ac7c-487b-b561-271cb25cf994/volumes" Nov 28 06:39:19 crc kubenswrapper[4955]: I1128 06:39:19.564252 4955 generic.go:334] "Generic (PLEG): container finished" podID="2d0bb158-ce32-468c-a2cc-b99759e19390" containerID="3e46318759576b68895285af0db24913171a9bf8d6f917f080b5a24b0f0ed32f" exitCode=0 Nov 28 06:39:19 crc kubenswrapper[4955]: I1128 06:39:19.564333 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ql7bs" event={"ID":"2d0bb158-ce32-468c-a2cc-b99759e19390","Type":"ContainerDied","Data":"3e46318759576b68895285af0db24913171a9bf8d6f917f080b5a24b0f0ed32f"} Nov 28 06:39:20 crc kubenswrapper[4955]: I1128 06:39:20.353685 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Nov 28 06:39:20 crc kubenswrapper[4955]: I1128 06:39:20.354044 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:39:21 crc kubenswrapper[4955]: E1128 06:39:21.674175 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 28 06:39:21 crc kubenswrapper[4955]: E1128 06:39:21.674917 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6z4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-h7lw7_openstack(0ffeda94-da23-484b-b623-fe3101c66890): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:39:21 crc kubenswrapper[4955]: E1128 06:39:21.676418 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-h7lw7" 
podUID="0ffeda94-da23-484b-b623-fe3101c66890" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.863935 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.874299 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.875770 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956045 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6pmm\" (UniqueName: \"kubernetes.io/projected/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-kube-api-access-z6pmm\") pod \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956151 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-sb\") pod \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956223 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-scripts\") pod \"5b0eeaa3-ddeb-42e3-af1f-249881515886\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956254 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0eeaa3-ddeb-42e3-af1f-249881515886-logs\") pod \"5b0eeaa3-ddeb-42e3-af1f-249881515886\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") 
" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956276 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-nb\") pod \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956348 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmgvn\" (UniqueName: \"kubernetes.io/projected/2d0bb158-ce32-468c-a2cc-b99759e19390-kube-api-access-bmgvn\") pod \"2d0bb158-ce32-468c-a2cc-b99759e19390\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956377 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5b0eeaa3-ddeb-42e3-af1f-249881515886-horizon-secret-key\") pod \"5b0eeaa3-ddeb-42e3-af1f-249881515886\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956401 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-config-data\") pod \"5b0eeaa3-ddeb-42e3-af1f-249881515886\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956422 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-svc\") pod \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956440 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-config\") pod \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956463 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-combined-ca-bundle\") pod \"2d0bb158-ce32-468c-a2cc-b99759e19390\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956487 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfc8p\" (UniqueName: \"kubernetes.io/projected/5b0eeaa3-ddeb-42e3-af1f-249881515886-kube-api-access-tfc8p\") pod \"5b0eeaa3-ddeb-42e3-af1f-249881515886\" (UID: \"5b0eeaa3-ddeb-42e3-af1f-249881515886\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956550 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-config\") pod \"2d0bb158-ce32-468c-a2cc-b99759e19390\" (UID: \"2d0bb158-ce32-468c-a2cc-b99759e19390\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.956580 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-swift-storage-0\") pod \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\" (UID: \"a7418d4f-d4f8-4e84-a59c-2f7025a856b3\") " Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.961982 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-scripts" (OuterVolumeSpecName: "scripts") pod "5b0eeaa3-ddeb-42e3-af1f-249881515886" (UID: "5b0eeaa3-ddeb-42e3-af1f-249881515886"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.962280 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-config-data" (OuterVolumeSpecName: "config-data") pod "5b0eeaa3-ddeb-42e3-af1f-249881515886" (UID: "5b0eeaa3-ddeb-42e3-af1f-249881515886"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.963250 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b0eeaa3-ddeb-42e3-af1f-249881515886-logs" (OuterVolumeSpecName: "logs") pod "5b0eeaa3-ddeb-42e3-af1f-249881515886" (UID: "5b0eeaa3-ddeb-42e3-af1f-249881515886"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.964148 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0eeaa3-ddeb-42e3-af1f-249881515886-kube-api-access-tfc8p" (OuterVolumeSpecName: "kube-api-access-tfc8p") pod "5b0eeaa3-ddeb-42e3-af1f-249881515886" (UID: "5b0eeaa3-ddeb-42e3-af1f-249881515886"). InnerVolumeSpecName "kube-api-access-tfc8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.964205 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-kube-api-access-z6pmm" (OuterVolumeSpecName: "kube-api-access-z6pmm") pod "a7418d4f-d4f8-4e84-a59c-2f7025a856b3" (UID: "a7418d4f-d4f8-4e84-a59c-2f7025a856b3"). InnerVolumeSpecName "kube-api-access-z6pmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.964279 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d0bb158-ce32-468c-a2cc-b99759e19390-kube-api-access-bmgvn" (OuterVolumeSpecName: "kube-api-access-bmgvn") pod "2d0bb158-ce32-468c-a2cc-b99759e19390" (UID: "2d0bb158-ce32-468c-a2cc-b99759e19390"). InnerVolumeSpecName "kube-api-access-bmgvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.964878 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0eeaa3-ddeb-42e3-af1f-249881515886-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5b0eeaa3-ddeb-42e3-af1f-249881515886" (UID: "5b0eeaa3-ddeb-42e3-af1f-249881515886"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:21 crc kubenswrapper[4955]: I1128 06:39:21.990735 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-config" (OuterVolumeSpecName: "config") pod "2d0bb158-ce32-468c-a2cc-b99759e19390" (UID: "2d0bb158-ce32-468c-a2cc-b99759e19390"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.003047 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d0bb158-ce32-468c-a2cc-b99759e19390" (UID: "2d0bb158-ce32-468c-a2cc-b99759e19390"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.003203 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7418d4f-d4f8-4e84-a59c-2f7025a856b3" (UID: "a7418d4f-d4f8-4e84-a59c-2f7025a856b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.010452 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7418d4f-d4f8-4e84-a59c-2f7025a856b3" (UID: "a7418d4f-d4f8-4e84-a59c-2f7025a856b3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.018766 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7418d4f-d4f8-4e84-a59c-2f7025a856b3" (UID: "a7418d4f-d4f8-4e84-a59c-2f7025a856b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.020551 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7418d4f-d4f8-4e84-a59c-2f7025a856b3" (UID: "a7418d4f-d4f8-4e84-a59c-2f7025a856b3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.031798 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-config" (OuterVolumeSpecName: "config") pod "a7418d4f-d4f8-4e84-a59c-2f7025a856b3" (UID: "a7418d4f-d4f8-4e84-a59c-2f7025a856b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.058491 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.058971 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmgvn\" (UniqueName: \"kubernetes.io/projected/2d0bb158-ce32-468c-a2cc-b99759e19390-kube-api-access-bmgvn\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059034 4955 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5b0eeaa3-ddeb-42e3-af1f-249881515886-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059126 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059247 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059301 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059378 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059517 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfc8p\" (UniqueName: \"kubernetes.io/projected/5b0eeaa3-ddeb-42e3-af1f-249881515886-kube-api-access-tfc8p\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059599 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2d0bb158-ce32-468c-a2cc-b99759e19390-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059697 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059770 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6pmm\" (UniqueName: \"kubernetes.io/projected/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-kube-api-access-z6pmm\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059846 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7418d4f-d4f8-4e84-a59c-2f7025a856b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.059932 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5b0eeaa3-ddeb-42e3-af1f-249881515886-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 
06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.060016 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b0eeaa3-ddeb-42e3-af1f-249881515886-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.201529 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.593774 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"edf21a9b-612c-4fa2-a439-5f05b11606bc","Type":"ContainerStarted","Data":"2eb05fcedd984f6f35c1fbf6ffd504a2feb36bf4c927a35c30286cdffbd113cf"} Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.596067 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ql7bs" event={"ID":"2d0bb158-ce32-468c-a2cc-b99759e19390","Type":"ContainerDied","Data":"aa5481513fe1d6cd79b93b114fb28a5f6e4ff93eb94c02b9e80d896d02ab7a32"} Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.596087 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa5481513fe1d6cd79b93b114fb28a5f6e4ff93eb94c02b9e80d896d02ab7a32" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.596490 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ql7bs" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.597174 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-647988d457-2xzbx" event={"ID":"5b0eeaa3-ddeb-42e3-af1f-249881515886","Type":"ContainerDied","Data":"1fa81b60ade65b74256ecafe88a4e56e5fe17ebf4ed2ea22c0d3e85bf7f35551"} Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.597419 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-647988d457-2xzbx" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.599949 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.599957 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" event={"ID":"a7418d4f-d4f8-4e84-a59c-2f7025a856b3","Type":"ContainerDied","Data":"9eae7845ed4913899b66cfe900fe60c0a4c1178deb24a8c1a2c6153a182a3a93"} Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.600050 4955 scope.go:117] "RemoveContainer" containerID="1c34a246ccc9c4e2ba00f6bf4c81ffba04429c3d33460b41b640aff44468e9cc" Nov 28 06:39:22 crc kubenswrapper[4955]: E1128 06:39:22.600948 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-h7lw7" podUID="0ffeda94-da23-484b-b623-fe3101c66890" Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.673720 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-647988d457-2xzbx"] Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.690651 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-647988d457-2xzbx"] Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.698978 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dwjhz"] Nov 28 06:39:22 crc kubenswrapper[4955]: I1128 06:39:22.706073 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-dwjhz"] Nov 28 06:39:22 crc kubenswrapper[4955]: W1128 06:39:22.985721 4955 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod541ddc3e_f13a_4d19_9d07_5b1897c10957.slice/crio-b2dbcdc49d7569e04666889acd886d64b0c53701a811f080393c1ffe8a25e012 WatchSource:0}: Error finding container b2dbcdc49d7569e04666889acd886d64b0c53701a811f080393c1ffe8a25e012: Status 404 returned error can't find the container with id b2dbcdc49d7569e04666889acd886d64b0c53701a811f080393c1ffe8a25e012 Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.038791 4955 scope.go:117] "RemoveContainer" containerID="5b0de1c1b2164247f1ee0ca3fe5b4a0b62b5ed56e3dcd2394c564b503a25d22a" Nov 28 06:39:23 crc kubenswrapper[4955]: E1128 06:39:23.040631 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 28 06:39:23 crc kubenswrapper[4955]: E1128 06:39:23.040775 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pc58h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-d4bdh_openstack(c39c6827-9dc3-482d-a268-8ba9348b925e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 06:39:23 crc kubenswrapper[4955]: E1128 06:39:23.042197 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-d4bdh" podUID="c39c6827-9dc3-482d-a268-8ba9348b925e" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.042482 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8mlfs"] Nov 28 06:39:23 crc kubenswrapper[4955]: E1128 06:39:23.042819 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="init" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.042831 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="init" Nov 28 06:39:23 crc kubenswrapper[4955]: E1128 06:39:23.042846 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="dnsmasq-dns" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.042852 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="dnsmasq-dns" Nov 28 06:39:23 crc kubenswrapper[4955]: E1128 06:39:23.042869 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d0bb158-ce32-468c-a2cc-b99759e19390" containerName="neutron-db-sync" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.042876 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2d0bb158-ce32-468c-a2cc-b99759e19390" containerName="neutron-db-sync" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.043202 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d0bb158-ce32-468c-a2cc-b99759e19390" containerName="neutron-db-sync" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.043214 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="dnsmasq-dns" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.044297 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.083639 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8mlfs"] Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.158698 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-76f57c54dd-27284"] Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.162591 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.169009 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.169116 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.169328 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-v4mtk" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.169426 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.189679 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.189796 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.189959 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-config\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 
06:39:23.189992 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cjmx\" (UniqueName: \"kubernetes.io/projected/00785eeb-47d1-4a5e-9c69-489af5075748-kube-api-access-8cjmx\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.190027 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.190062 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.191066 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76f57c54dd-27284"] Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.291799 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fvq\" (UniqueName: \"kubernetes.io/projected/56f0f65a-7f13-4483-9806-7fa8d2738a27-kube-api-access-h9fvq\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292101 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292151 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292177 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-httpd-config\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292227 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-ovndb-tls-certs\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292253 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-config\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292272 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cjmx\" (UniqueName: 
\"kubernetes.io/projected/00785eeb-47d1-4a5e-9c69-489af5075748-kube-api-access-8cjmx\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292293 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292313 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292330 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-combined-ca-bundle\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.292353 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-config\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.293143 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-config\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.293147 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-swift-storage-0\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.293290 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-nb\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.293358 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-svc\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.293368 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-sb\") pod \"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.310945 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cjmx\" (UniqueName: \"kubernetes.io/projected/00785eeb-47d1-4a5e-9c69-489af5075748-kube-api-access-8cjmx\") pod 
\"dnsmasq-dns-84b966f6c9-8mlfs\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.393381 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.393425 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.397403 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-ovndb-tls-certs\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.397464 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-combined-ca-bundle\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.397492 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-config\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " 
pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.397554 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fvq\" (UniqueName: \"kubernetes.io/projected/56f0f65a-7f13-4483-9806-7fa8d2738a27-kube-api-access-h9fvq\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.397607 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-httpd-config\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.401690 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-ovndb-tls-certs\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.405458 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-combined-ca-bundle\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.406158 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-config\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.406607 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-httpd-config\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.412886 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fvq\" (UniqueName: \"kubernetes.io/projected/56f0f65a-7f13-4483-9806-7fa8d2738a27-kube-api-access-h9fvq\") pod \"neutron-76f57c54dd-27284\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.492472 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.498040 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.571572 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56f45c5b6-nqg9b"] Nov 28 06:39:23 crc kubenswrapper[4955]: W1128 06:39:23.607619 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0540bb1f_c904_4b07_acda_ce47d0bdfa7c.slice/crio-1f2e32700001365f75da394b2b84de6c18569c4de4cb2a59331c48d2c8d76cf5 WatchSource:0}: Error finding container 1f2e32700001365f75da394b2b84de6c18569c4de4cb2a59331c48d2c8d76cf5: Status 404 returned error can't find the container with id 1f2e32700001365f75da394b2b84de6c18569c4de4cb2a59331c48d2c8d76cf5 Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.615072 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"541ddc3e-f13a-4d19-9d07-5b1897c10957","Type":"ContainerStarted","Data":"b2dbcdc49d7569e04666889acd886d64b0c53701a811f080393c1ffe8a25e012"} Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.634588 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-9c465b4d8-cslvv"] Nov 28 06:39:23 crc kubenswrapper[4955]: E1128 06:39:23.667348 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-d4bdh" podUID="c39c6827-9dc3-482d-a268-8ba9348b925e" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.733640 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0eeaa3-ddeb-42e3-af1f-249881515886" path="/var/lib/kubelet/pods/5b0eeaa3-ddeb-42e3-af1f-249881515886/volumes" Nov 28 06:39:23 crc kubenswrapper[4955]: I1128 06:39:23.734305 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" path="/var/lib/kubelet/pods/a7418d4f-d4f8-4e84-a59c-2f7025a856b3/volumes" Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.077236 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-44sl9"] Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.129988 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.399671 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8mlfs"] Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.643341 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-44sl9" event={"ID":"bddb6574-5273-410a-93aa-5293a16dfeba","Type":"ContainerStarted","Data":"9382abf70a398fe8d51cd5f1613349f40e94b7d715e98cdd1de46edc67f0a9ac"} Nov 28 
06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.643596 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-44sl9" event={"ID":"bddb6574-5273-410a-93aa-5293a16dfeba","Type":"ContainerStarted","Data":"7ba094ff2b67e08488c0046c851ef17e78856735b0be9e2f832f4f7f4ee39b98"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.658800 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f45c5b6-nqg9b" event={"ID":"0540bb1f-c904-4b07-acda-ce47d0bdfa7c","Type":"ContainerStarted","Data":"920b5ab37fa08502dfb12c0b0c98137f9cbd1e3469f989feaec103076b5126ea"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.659031 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f45c5b6-nqg9b" event={"ID":"0540bb1f-c904-4b07-acda-ce47d0bdfa7c","Type":"ContainerStarted","Data":"6b6cac6679b4945c945c89551509c828b4ca8bb9e2a43433208349c76dfa9396"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.659151 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f45c5b6-nqg9b" event={"ID":"0540bb1f-c904-4b07-acda-ce47d0bdfa7c","Type":"ContainerStarted","Data":"1f2e32700001365f75da394b2b84de6c18569c4de4cb2a59331c48d2c8d76cf5"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.663451 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-44sl9" podStartSLOduration=9.663429107 podStartE2EDuration="9.663429107s" podCreationTimestamp="2025-11-28 06:39:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:24.662167271 +0000 UTC m=+1087.251422861" watchObservedRunningTime="2025-11-28 06:39:24.663429107 +0000 UTC m=+1087.252684677" Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.670415 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb","Type":"ContainerStarted","Data":"c51d62fb4c5ee6832d2e95611a72db7923d2f0d61cb80a135e6e343c7c7fa2db"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.683005 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcf66475f-c4s6x" event={"ID":"ffad8eb2-ac71-461f-a0fc-0203951d3e05","Type":"ContainerStarted","Data":"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.689005 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9c465b4d8-cslvv" event={"ID":"f3a8eb88-043f-44ca-8b8c-68288a2045d9","Type":"ContainerStarted","Data":"04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.689244 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9c465b4d8-cslvv" event={"ID":"f3a8eb88-043f-44ca-8b8c-68288a2045d9","Type":"ContainerStarted","Data":"0f649e74350aab47b349880065239a1b6b55e80aed6a5dcf364a2c85f1a3d45e"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.691897 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56f45c5b6-nqg9b" podStartSLOduration=21.691882277 podStartE2EDuration="21.691882277s" podCreationTimestamp="2025-11-28 06:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:24.686077021 +0000 UTC m=+1087.275332601" watchObservedRunningTime="2025-11-28 06:39:24.691882277 +0000 UTC m=+1087.281137847" Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.700821 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c4dc88849-jtrxl" event={"ID":"1399b8d3-cee5-44f3-9747-701eb22526a8","Type":"ContainerStarted","Data":"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270"} Nov 28 06:39:24 crc 
kubenswrapper[4955]: I1128 06:39:24.702207 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jqvkj" event={"ID":"0a694432-dcc2-45d4-a492-f43f79169fc4","Type":"ContainerStarted","Data":"a5ed897781d3623593458af5cd707583a496e16dc664b58fcb2d35086cffc6f6"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.705431 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" event={"ID":"00785eeb-47d1-4a5e-9c69-489af5075748","Type":"ContainerStarted","Data":"318962e847110c6253a605d800f59e7f691928c203102735863cf0ebfc646c33"} Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.709030 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-76f57c54dd-27284"] Nov 28 06:39:24 crc kubenswrapper[4955]: I1128 06:39:24.724416 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jqvkj" podStartSLOduration=4.406722271 podStartE2EDuration="30.724401972s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="2025-11-28 06:38:56.724579951 +0000 UTC m=+1059.313835521" lastFinishedPulling="2025-11-28 06:39:23.042259652 +0000 UTC m=+1085.631515222" observedRunningTime="2025-11-28 06:39:24.722141738 +0000 UTC m=+1087.311397308" watchObservedRunningTime="2025-11-28 06:39:24.724401972 +0000 UTC m=+1087.313657542" Nov 28 06:39:24 crc kubenswrapper[4955]: W1128 06:39:24.728956 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f0f65a_7f13_4483_9806_7fa8d2738a27.slice/crio-4995f313a244b8354071b34ba2655aae9621a7de5dcef26d180b250334246633 WatchSource:0}: Error finding container 4995f313a244b8354071b34ba2655aae9621a7de5dcef26d180b250334246633: Status 404 returned error can't find the container with id 4995f313a244b8354071b34ba2655aae9621a7de5dcef26d180b250334246633 Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.354916 4955 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-dwjhz" podUID="a7418d4f-d4f8-4e84-a59c-2f7025a856b3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: i/o timeout" Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.780938 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9c465b4d8-cslvv" event={"ID":"f3a8eb88-043f-44ca-8b8c-68288a2045d9","Type":"ContainerStarted","Data":"0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.806753 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c4dc88849-jtrxl" event={"ID":"1399b8d3-cee5-44f3-9747-701eb22526a8","Type":"ContainerStarted","Data":"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.806941 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c4dc88849-jtrxl" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerName="horizon-log" containerID="cri-o://052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270" gracePeriod=30 Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.807220 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c4dc88849-jtrxl" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerName="horizon" containerID="cri-o://768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34" gracePeriod=30 Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.823883 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-9c465b4d8-cslvv" podStartSLOduration=22.823860914 podStartE2EDuration="22.823860914s" podCreationTimestamp="2025-11-28 06:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-28 06:39:25.814144217 +0000 UTC m=+1088.403399787" watchObservedRunningTime="2025-11-28 06:39:25.823860914 +0000 UTC m=+1088.413116484" Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.829801 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76f57c54dd-27284" event={"ID":"56f0f65a-7f13-4483-9806-7fa8d2738a27","Type":"ContainerStarted","Data":"76b9c94ae0d7f0f9995f107a2b75635258e264d11828adba5c4997f976d10f3b"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.829914 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76f57c54dd-27284" event={"ID":"56f0f65a-7f13-4483-9806-7fa8d2738a27","Type":"ContainerStarted","Data":"0580084a5e1e9d160950ef212cc3a01146f7db7944d1a655618cc2cea72da792"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.829979 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76f57c54dd-27284" event={"ID":"56f0f65a-7f13-4483-9806-7fa8d2738a27","Type":"ContainerStarted","Data":"4995f313a244b8354071b34ba2655aae9621a7de5dcef26d180b250334246633"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.830123 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.876134 4955 generic.go:334] "Generic (PLEG): container finished" podID="00785eeb-47d1-4a5e-9c69-489af5075748" containerID="9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457" exitCode=0 Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.876630 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" event={"ID":"00785eeb-47d1-4a5e-9c69-489af5075748","Type":"ContainerDied","Data":"9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.885315 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/horizon-6c4dc88849-jtrxl" podStartSLOduration=4.600761476 podStartE2EDuration="31.885295033s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="2025-11-28 06:38:55.882331353 +0000 UTC m=+1058.471586923" lastFinishedPulling="2025-11-28 06:39:23.16686491 +0000 UTC m=+1085.756120480" observedRunningTime="2025-11-28 06:39:25.858217822 +0000 UTC m=+1088.447473392" watchObservedRunningTime="2025-11-28 06:39:25.885295033 +0000 UTC m=+1088.474550603" Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.897746 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcf66475f-c4s6x" event={"ID":"ffad8eb2-ac71-461f-a0fc-0203951d3e05","Type":"ContainerStarted","Data":"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.897945 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bcf66475f-c4s6x" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerName="horizon-log" containerID="cri-o://9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420" gracePeriod=30 Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.898215 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bcf66475f-c4s6x" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerName="horizon" containerID="cri-o://19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5" gracePeriod=30 Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.914680 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"541ddc3e-f13a-4d19-9d07-5b1897c10957","Type":"ContainerStarted","Data":"d58a523f0cf3ee48aca6ceda130fd2a84743df01a30addb7ecfd48a841af1c3f"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.917582 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"edf21a9b-612c-4fa2-a439-5f05b11606bc","Type":"ContainerStarted","Data":"963efd33a47443ec39e590c4e74cab10ee3ded240ba599ca5f9db5eb87e3f4c0"} Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.918220 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerName="glance-httpd" containerID="cri-o://2e704c2b5886e7eefa3ae80885a63a19298b86603c4f888c9b2078ba91241f88" gracePeriod=30 Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.916935 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerName="glance-log" containerID="cri-o://963efd33a47443ec39e590c4e74cab10ee3ded240ba599ca5f9db5eb87e3f4c0" gracePeriod=30 Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.942420 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6478fb8469-kzjkp"] Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.957675 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-76f57c54dd-27284" podStartSLOduration=2.957647583 podStartE2EDuration="2.957647583s" podCreationTimestamp="2025-11-28 06:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:25.885871499 +0000 UTC m=+1088.475127089" watchObservedRunningTime="2025-11-28 06:39:25.957647583 +0000 UTC m=+1088.546903153" Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.959466 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6478fb8469-kzjkp"] Nov 28 06:39:25 crc kubenswrapper[4955]: I1128 06:39:25.963021 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.015675 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.016098 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.114892 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-httpd-config\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.115064 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-combined-ca-bundle\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.115174 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-ovndb-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.115542 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-internal-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " 
pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.115581 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-config\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.115600 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-public-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.115640 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcvwz\" (UniqueName: \"kubernetes.io/projected/8716e967-61aa-43b9-9d68-cb6699c5c673-kube-api-access-fcvwz\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.122332 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bcf66475f-c4s6x" podStartSLOduration=4.572674006 podStartE2EDuration="32.12229616s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="2025-11-28 06:38:55.467726179 +0000 UTC m=+1058.056981749" lastFinishedPulling="2025-11-28 06:39:23.017348333 +0000 UTC m=+1085.606603903" observedRunningTime="2025-11-28 06:39:26.041376496 +0000 UTC m=+1088.630632076" watchObservedRunningTime="2025-11-28 06:39:26.12229616 +0000 UTC m=+1088.711551730" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.152275 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.152246023 podStartE2EDuration="26.152246023s" podCreationTimestamp="2025-11-28 06:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:26.088050005 +0000 UTC m=+1088.677305575" watchObservedRunningTime="2025-11-28 06:39:26.152246023 +0000 UTC m=+1088.741501593" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.217343 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-internal-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.217393 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-config\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.217414 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-public-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.217451 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcvwz\" (UniqueName: \"kubernetes.io/projected/8716e967-61aa-43b9-9d68-cb6699c5c673-kube-api-access-fcvwz\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc 
kubenswrapper[4955]: I1128 06:39:26.217514 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-httpd-config\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.217593 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-combined-ca-bundle\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.217647 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-ovndb-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.222177 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-ovndb-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.222288 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-internal-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.222354 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-config\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.223238 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-httpd-config\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.224219 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-public-tls-certs\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.226179 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8716e967-61aa-43b9-9d68-cb6699c5c673-combined-ca-bundle\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.244810 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcvwz\" (UniqueName: \"kubernetes.io/projected/8716e967-61aa-43b9-9d68-cb6699c5c673-kube-api-access-fcvwz\") pod \"neutron-6478fb8469-kzjkp\" (UID: \"8716e967-61aa-43b9-9d68-cb6699c5c673\") " pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.403608 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.943830 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" event={"ID":"00785eeb-47d1-4a5e-9c69-489af5075748","Type":"ContainerStarted","Data":"4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504"} Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.953568 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"541ddc3e-f13a-4d19-9d07-5b1897c10957","Type":"ContainerStarted","Data":"51bbeec0374ec4ab63e44f6776389e37691f9370d7fd70c3857fcc9a4f70e1d1"} Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.971198 4955 generic.go:334] "Generic (PLEG): container finished" podID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerID="2e704c2b5886e7eefa3ae80885a63a19298b86603c4f888c9b2078ba91241f88" exitCode=143 Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.971224 4955 generic.go:334] "Generic (PLEG): container finished" podID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerID="963efd33a47443ec39e590c4e74cab10ee3ded240ba599ca5f9db5eb87e3f4c0" exitCode=143 Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.971239 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"edf21a9b-612c-4fa2-a439-5f05b11606bc","Type":"ContainerDied","Data":"2e704c2b5886e7eefa3ae80885a63a19298b86603c4f888c9b2078ba91241f88"} Nov 28 06:39:26 crc kubenswrapper[4955]: I1128 06:39:26.971284 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"edf21a9b-612c-4fa2-a439-5f05b11606bc","Type":"ContainerDied","Data":"963efd33a47443ec39e590c4e74cab10ee3ded240ba599ca5f9db5eb87e3f4c0"} Nov 28 06:39:27 crc kubenswrapper[4955]: I1128 06:39:27.035344 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6478fb8469-kzjkp"] 
Nov 28 06:39:27 crc kubenswrapper[4955]: I1128 06:39:27.998614 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6478fb8469-kzjkp" event={"ID":"8716e967-61aa-43b9-9d68-cb6699c5c673","Type":"ContainerStarted","Data":"8cdf6089bf37fdcc38657ee948ccd7a5d5eb535d006fd2f711ec38e95273ede7"} Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.008693 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.008689 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerName="glance-log" containerID="cri-o://d58a523f0cf3ee48aca6ceda130fd2a84743df01a30addb7ecfd48a841af1c3f" gracePeriod=30 Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.008720 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerName="glance-httpd" containerID="cri-o://51bbeec0374ec4ab63e44f6776389e37691f9370d7fd70c3857fcc9a4f70e1d1" gracePeriod=30 Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.053705 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=29.053677955 podStartE2EDuration="29.053677955s" podCreationTimestamp="2025-11-28 06:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:29.046559762 +0000 UTC m=+1091.635815363" watchObservedRunningTime="2025-11-28 06:39:29.053677955 +0000 UTC m=+1091.642933525" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.098649 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" podStartSLOduration=6.098630655 
podStartE2EDuration="6.098630655s" podCreationTimestamp="2025-11-28 06:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:29.07739711 +0000 UTC m=+1091.666652810" watchObservedRunningTime="2025-11-28 06:39:29.098630655 +0000 UTC m=+1091.687886225" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.619669 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.738664 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-scripts\") pod \"edf21a9b-612c-4fa2-a439-5f05b11606bc\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.738857 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-httpd-run\") pod \"edf21a9b-612c-4fa2-a439-5f05b11606bc\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.738980 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfrxn\" (UniqueName: \"kubernetes.io/projected/edf21a9b-612c-4fa2-a439-5f05b11606bc-kube-api-access-pfrxn\") pod \"edf21a9b-612c-4fa2-a439-5f05b11606bc\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.739030 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-combined-ca-bundle\") pod \"edf21a9b-612c-4fa2-a439-5f05b11606bc\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " Nov 28 06:39:29 crc kubenswrapper[4955]: 
I1128 06:39:29.739063 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-internal-tls-certs\") pod \"edf21a9b-612c-4fa2-a439-5f05b11606bc\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.739154 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"edf21a9b-612c-4fa2-a439-5f05b11606bc\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.739191 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-logs\") pod \"edf21a9b-612c-4fa2-a439-5f05b11606bc\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.739221 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-config-data\") pod \"edf21a9b-612c-4fa2-a439-5f05b11606bc\" (UID: \"edf21a9b-612c-4fa2-a439-5f05b11606bc\") " Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.740838 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "edf21a9b-612c-4fa2-a439-5f05b11606bc" (UID: "edf21a9b-612c-4fa2-a439-5f05b11606bc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.741575 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-logs" (OuterVolumeSpecName: "logs") pod "edf21a9b-612c-4fa2-a439-5f05b11606bc" (UID: "edf21a9b-612c-4fa2-a439-5f05b11606bc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.748646 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "edf21a9b-612c-4fa2-a439-5f05b11606bc" (UID: "edf21a9b-612c-4fa2-a439-5f05b11606bc"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.749052 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf21a9b-612c-4fa2-a439-5f05b11606bc-kube-api-access-pfrxn" (OuterVolumeSpecName: "kube-api-access-pfrxn") pod "edf21a9b-612c-4fa2-a439-5f05b11606bc" (UID: "edf21a9b-612c-4fa2-a439-5f05b11606bc"). InnerVolumeSpecName "kube-api-access-pfrxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.748718 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-scripts" (OuterVolumeSpecName: "scripts") pod "edf21a9b-612c-4fa2-a439-5f05b11606bc" (UID: "edf21a9b-612c-4fa2-a439-5f05b11606bc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.797713 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edf21a9b-612c-4fa2-a439-5f05b11606bc" (UID: "edf21a9b-612c-4fa2-a439-5f05b11606bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.819883 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "edf21a9b-612c-4fa2-a439-5f05b11606bc" (UID: "edf21a9b-612c-4fa2-a439-5f05b11606bc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.827709 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-config-data" (OuterVolumeSpecName: "config-data") pod "edf21a9b-612c-4fa2-a439-5f05b11606bc" (UID: "edf21a9b-612c-4fa2-a439-5f05b11606bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.841897 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.841963 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfrxn\" (UniqueName: \"kubernetes.io/projected/edf21a9b-612c-4fa2-a439-5f05b11606bc-kube-api-access-pfrxn\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.841978 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.841988 4955 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.842038 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.842178 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edf21a9b-612c-4fa2-a439-5f05b11606bc-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.842200 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.842350 4955 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf21a9b-612c-4fa2-a439-5f05b11606bc-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.880279 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 06:39:29 crc kubenswrapper[4955]: I1128 06:39:29.944037 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.017462 4955 generic.go:334] "Generic (PLEG): container finished" podID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerID="d58a523f0cf3ee48aca6ceda130fd2a84743df01a30addb7ecfd48a841af1c3f" exitCode=143 Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.017549 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"541ddc3e-f13a-4d19-9d07-5b1897c10957","Type":"ContainerDied","Data":"d58a523f0cf3ee48aca6ceda130fd2a84743df01a30addb7ecfd48a841af1c3f"} Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.019064 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6478fb8469-kzjkp" event={"ID":"8716e967-61aa-43b9-9d68-cb6699c5c673","Type":"ContainerStarted","Data":"eadc8866a85019b830541cbdd7f0ba201084145978de835a52e7e9e121e24eb7"} Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.021284 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.023943 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"edf21a9b-612c-4fa2-a439-5f05b11606bc","Type":"ContainerDied","Data":"2eb05fcedd984f6f35c1fbf6ffd504a2feb36bf4c927a35c30286cdffbd113cf"} Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.024028 4955 scope.go:117] "RemoveContainer" containerID="2e704c2b5886e7eefa3ae80885a63a19298b86603c4f888c9b2078ba91241f88" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.071575 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.081578 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.106788 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:30 crc kubenswrapper[4955]: E1128 06:39:30.107449 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerName="glance-log" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.107468 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerName="glance-log" Nov 28 06:39:30 crc kubenswrapper[4955]: E1128 06:39:30.107498 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerName="glance-httpd" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.107516 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerName="glance-httpd" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.107722 4955 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerName="glance-httpd" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.107751 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" containerName="glance-log" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.109005 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.113717 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.115696 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.116265 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.254690 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.254757 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.254871 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.254983 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.255062 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-logs\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.255119 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.255149 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mpkq\" (UniqueName: \"kubernetes.io/projected/aed34078-a41e-4dda-bb13-b8dd5379ba91-kube-api-access-6mpkq\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.255167 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.357074 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.357414 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mpkq\" (UniqueName: \"kubernetes.io/projected/aed34078-a41e-4dda-bb13-b8dd5379ba91-kube-api-access-6mpkq\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.357444 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.357481 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.357537 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.357611 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.357657 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.357680 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-logs\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.358171 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-logs\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.358974 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.359575 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.362991 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.363247 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-config-data\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.364636 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-scripts\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.365209 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.391889 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mpkq\" (UniqueName: \"kubernetes.io/projected/aed34078-a41e-4dda-bb13-b8dd5379ba91-kube-api-access-6mpkq\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.394131 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.455332 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.849918 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 06:39:30 crc kubenswrapper[4955]: I1128 06:39:30.849978 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 06:39:31 crc kubenswrapper[4955]: I1128 06:39:31.066490 4955 generic.go:334] "Generic (PLEG): container finished" podID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerID="51bbeec0374ec4ab63e44f6776389e37691f9370d7fd70c3857fcc9a4f70e1d1" exitCode=0 Nov 28 06:39:31 crc kubenswrapper[4955]: I1128 06:39:31.066547 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"541ddc3e-f13a-4d19-9d07-5b1897c10957","Type":"ContainerDied","Data":"51bbeec0374ec4ab63e44f6776389e37691f9370d7fd70c3857fcc9a4f70e1d1"} Nov 28 06:39:31 crc kubenswrapper[4955]: I1128 06:39:31.716298 4955 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf21a9b-612c-4fa2-a439-5f05b11606bc" path="/var/lib/kubelet/pods/edf21a9b-612c-4fa2-a439-5f05b11606bc/volumes" Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.090525 4955 generic.go:334] "Generic (PLEG): container finished" podID="bddb6574-5273-410a-93aa-5293a16dfeba" containerID="9382abf70a398fe8d51cd5f1613349f40e94b7d715e98cdd1de46edc67f0a9ac" exitCode=0 Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.090913 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-44sl9" event={"ID":"bddb6574-5273-410a-93aa-5293a16dfeba","Type":"ContainerDied","Data":"9382abf70a398fe8d51cd5f1613349f40e94b7d715e98cdd1de46edc67f0a9ac"} Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.101222 4955 generic.go:334] "Generic (PLEG): container finished" podID="0a694432-dcc2-45d4-a492-f43f79169fc4" containerID="a5ed897781d3623593458af5cd707583a496e16dc664b58fcb2d35086cffc6f6" exitCode=0 Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.101284 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jqvkj" event={"ID":"0a694432-dcc2-45d4-a492-f43f79169fc4","Type":"ContainerDied","Data":"a5ed897781d3623593458af5cd707583a496e16dc664b58fcb2d35086cffc6f6"} Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.494268 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.557807 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f98v8"] Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.558068 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" podUID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerName="dnsmasq-dns" containerID="cri-o://1d09310909f234e9400748e098724a349dcbf157ae5bad13c35aa2eca6a0010a" 
gracePeriod=10 Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.944415 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:33 crc kubenswrapper[4955]: I1128 06:39:33.944457 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:34 crc kubenswrapper[4955]: I1128 06:39:34.111432 4955 generic.go:334] "Generic (PLEG): container finished" podID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerID="1d09310909f234e9400748e098724a349dcbf157ae5bad13c35aa2eca6a0010a" exitCode=0 Nov 28 06:39:34 crc kubenswrapper[4955]: I1128 06:39:34.111609 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" event={"ID":"aa661871-e4da-48a2-820b-3c5cec9e6ce0","Type":"ContainerDied","Data":"1d09310909f234e9400748e098724a349dcbf157ae5bad13c35aa2eca6a0010a"} Nov 28 06:39:34 crc kubenswrapper[4955]: I1128 06:39:34.207918 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:34 crc kubenswrapper[4955]: I1128 06:39:34.208825 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:34 crc kubenswrapper[4955]: I1128 06:39:34.209671 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f45c5b6-nqg9b" podUID="0540bb1f-c904-4b07-acda-ce47d0bdfa7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Nov 28 06:39:34 crc kubenswrapper[4955]: I1128 06:39:34.684682 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.055222 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.212280 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" podUID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.458836 4955 scope.go:117] "RemoveContainer" containerID="963efd33a47443ec39e590c4e74cab10ee3ded240ba599ca5f9db5eb87e3f4c0" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.783754 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jqvkj" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.813528 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870012 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-44sl9" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870353 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a694432-dcc2-45d4-a492-f43f79169fc4-logs\") pod \"0a694432-dcc2-45d4-a492-f43f79169fc4\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870410 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fckbh\" (UniqueName: \"kubernetes.io/projected/541ddc3e-f13a-4d19-9d07-5b1897c10957-kube-api-access-fckbh\") pod \"541ddc3e-f13a-4d19-9d07-5b1897c10957\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870441 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-combined-ca-bundle\") pod \"0a694432-dcc2-45d4-a492-f43f79169fc4\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870465 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-httpd-run\") pod \"541ddc3e-f13a-4d19-9d07-5b1897c10957\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870526 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-config-data\") pod \"541ddc3e-f13a-4d19-9d07-5b1897c10957\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870555 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-combined-ca-bundle\") pod \"541ddc3e-f13a-4d19-9d07-5b1897c10957\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870617 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-logs\") pod \"541ddc3e-f13a-4d19-9d07-5b1897c10957\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870648 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-config-data\") pod \"0a694432-dcc2-45d4-a492-f43f79169fc4\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870663 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzx7s\" (UniqueName: \"kubernetes.io/projected/0a694432-dcc2-45d4-a492-f43f79169fc4-kube-api-access-xzx7s\") pod \"0a694432-dcc2-45d4-a492-f43f79169fc4\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870695 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"541ddc3e-f13a-4d19-9d07-5b1897c10957\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870723 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-scripts\") pod \"0a694432-dcc2-45d4-a492-f43f79169fc4\" (UID: \"0a694432-dcc2-45d4-a492-f43f79169fc4\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870737 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-scripts\") pod \"541ddc3e-f13a-4d19-9d07-5b1897c10957\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870768 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-public-tls-certs\") pod \"541ddc3e-f13a-4d19-9d07-5b1897c10957\" (UID: \"541ddc3e-f13a-4d19-9d07-5b1897c10957\") " Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.870820 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a694432-dcc2-45d4-a492-f43f79169fc4-logs" (OuterVolumeSpecName: "logs") pod "0a694432-dcc2-45d4-a492-f43f79169fc4" (UID: "0a694432-dcc2-45d4-a492-f43f79169fc4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.871089 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a694432-dcc2-45d4-a492-f43f79169fc4-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.871752 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "541ddc3e-f13a-4d19-9d07-5b1897c10957" (UID: "541ddc3e-f13a-4d19-9d07-5b1897c10957"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.875078 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8"
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.876201 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-logs" (OuterVolumeSpecName: "logs") pod "541ddc3e-f13a-4d19-9d07-5b1897c10957" (UID: "541ddc3e-f13a-4d19-9d07-5b1897c10957"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.881697 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541ddc3e-f13a-4d19-9d07-5b1897c10957-kube-api-access-fckbh" (OuterVolumeSpecName: "kube-api-access-fckbh") pod "541ddc3e-f13a-4d19-9d07-5b1897c10957" (UID: "541ddc3e-f13a-4d19-9d07-5b1897c10957"). InnerVolumeSpecName "kube-api-access-fckbh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.881804 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "541ddc3e-f13a-4d19-9d07-5b1897c10957" (UID: "541ddc3e-f13a-4d19-9d07-5b1897c10957"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.883448 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-scripts" (OuterVolumeSpecName: "scripts") pod "0a694432-dcc2-45d4-a492-f43f79169fc4" (UID: "0a694432-dcc2-45d4-a492-f43f79169fc4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.884591 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-scripts" (OuterVolumeSpecName: "scripts") pod "541ddc3e-f13a-4d19-9d07-5b1897c10957" (UID: "541ddc3e-f13a-4d19-9d07-5b1897c10957"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.886205 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a694432-dcc2-45d4-a492-f43f79169fc4-kube-api-access-xzx7s" (OuterVolumeSpecName: "kube-api-access-xzx7s") pod "0a694432-dcc2-45d4-a492-f43f79169fc4" (UID: "0a694432-dcc2-45d4-a492-f43f79169fc4"). InnerVolumeSpecName "kube-api-access-xzx7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.942525 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a694432-dcc2-45d4-a492-f43f79169fc4" (UID: "0a694432-dcc2-45d4-a492-f43f79169fc4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.959635 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-config-data" (OuterVolumeSpecName: "config-data") pod "0a694432-dcc2-45d4-a492-f43f79169fc4" (UID: "0a694432-dcc2-45d4-a492-f43f79169fc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.963484 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "541ddc3e-f13a-4d19-9d07-5b1897c10957" (UID: "541ddc3e-f13a-4d19-9d07-5b1897c10957"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.971876 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-combined-ca-bundle\") pod \"bddb6574-5273-410a-93aa-5293a16dfeba\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973547 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rhw8\" (UniqueName: \"kubernetes.io/projected/aa661871-e4da-48a2-820b-3c5cec9e6ce0-kube-api-access-7rhw8\") pod \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973612 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-scripts\") pod \"bddb6574-5273-410a-93aa-5293a16dfeba\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973671 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-config\") pod \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973704 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-credential-keys\") pod \"bddb6574-5273-410a-93aa-5293a16dfeba\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973730 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzcqj\" (UniqueName: \"kubernetes.io/projected/bddb6574-5273-410a-93aa-5293a16dfeba-kube-api-access-wzcqj\") pod \"bddb6574-5273-410a-93aa-5293a16dfeba\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973754 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-swift-storage-0\") pod \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973783 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-config-data\") pod \"bddb6574-5273-410a-93aa-5293a16dfeba\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973824 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-sb\") pod \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973860 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-fernet-keys\") pod \"bddb6574-5273-410a-93aa-5293a16dfeba\" (UID: \"bddb6574-5273-410a-93aa-5293a16dfeba\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973884 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-nb\") pod \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.973952 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-svc\") pod \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\" (UID: \"aa661871-e4da-48a2-820b-3c5cec9e6ce0\") "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.974448 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.974470 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-logs\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.974481 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.974494 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzx7s\" (UniqueName: \"kubernetes.io/projected/0a694432-dcc2-45d4-a492-f43f79169fc4-kube-api-access-xzx7s\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.975523 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" "
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.975615 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.975669 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.975727 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fckbh\" (UniqueName: \"kubernetes.io/projected/541ddc3e-f13a-4d19-9d07-5b1897c10957-kube-api-access-fckbh\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.975796 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a694432-dcc2-45d4-a492-f43f79169fc4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:35 crc kubenswrapper[4955]: I1128 06:39:35.975849 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/541ddc3e-f13a-4d19-9d07-5b1897c10957-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.013859 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bddb6574-5273-410a-93aa-5293a16dfeba" (UID: "bddb6574-5273-410a-93aa-5293a16dfeba"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.013909 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-scripts" (OuterVolumeSpecName: "scripts") pod "bddb6574-5273-410a-93aa-5293a16dfeba" (UID: "bddb6574-5273-410a-93aa-5293a16dfeba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.014550 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa661871-e4da-48a2-820b-3c5cec9e6ce0-kube-api-access-7rhw8" (OuterVolumeSpecName: "kube-api-access-7rhw8") pod "aa661871-e4da-48a2-820b-3c5cec9e6ce0" (UID: "aa661871-e4da-48a2-820b-3c5cec9e6ce0"). InnerVolumeSpecName "kube-api-access-7rhw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.040240 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bddb6574-5273-410a-93aa-5293a16dfeba" (UID: "bddb6574-5273-410a-93aa-5293a16dfeba"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.040382 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddb6574-5273-410a-93aa-5293a16dfeba-kube-api-access-wzcqj" (OuterVolumeSpecName: "kube-api-access-wzcqj") pod "bddb6574-5273-410a-93aa-5293a16dfeba" (UID: "bddb6574-5273-410a-93aa-5293a16dfeba"). InnerVolumeSpecName "kube-api-access-wzcqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.042809 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "541ddc3e-f13a-4d19-9d07-5b1897c10957" (UID: "541ddc3e-f13a-4d19-9d07-5b1897c10957"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.048576 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-config-data" (OuterVolumeSpecName: "config-data") pod "541ddc3e-f13a-4d19-9d07-5b1897c10957" (UID: "541ddc3e-f13a-4d19-9d07-5b1897c10957"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.076985 4955 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.077009 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rhw8\" (UniqueName: \"kubernetes.io/projected/aa661871-e4da-48a2-820b-3c5cec9e6ce0-kube-api-access-7rhw8\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.077019 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/541ddc3e-f13a-4d19-9d07-5b1897c10957-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.077027 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.077037 4955 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.077044 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzcqj\" (UniqueName: \"kubernetes.io/projected/bddb6574-5273-410a-93aa-5293a16dfeba-kube-api-access-wzcqj\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.077052 4955 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.081712 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aa661871-e4da-48a2-820b-3c5cec9e6ce0" (UID: "aa661871-e4da-48a2-820b-3c5cec9e6ce0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.094903 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bddb6574-5273-410a-93aa-5293a16dfeba" (UID: "bddb6574-5273-410a-93aa-5293a16dfeba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.112174 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.125728 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aa661871-e4da-48a2-820b-3c5cec9e6ce0" (UID: "aa661871-e4da-48a2-820b-3c5cec9e6ce0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.128182 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-config-data" (OuterVolumeSpecName: "config-data") pod "bddb6574-5273-410a-93aa-5293a16dfeba" (UID: "bddb6574-5273-410a-93aa-5293a16dfeba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.133531 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-44sl9" event={"ID":"bddb6574-5273-410a-93aa-5293a16dfeba","Type":"ContainerDied","Data":"7ba094ff2b67e08488c0046c851ef17e78856735b0be9e2f832f4f7f4ee39b98"}
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.133570 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba094ff2b67e08488c0046c851ef17e78856735b0be9e2f832f4f7f4ee39b98"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.133643 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-44sl9"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.134906 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-config" (OuterVolumeSpecName: "config") pod "aa661871-e4da-48a2-820b-3c5cec9e6ce0" (UID: "aa661871-e4da-48a2-820b-3c5cec9e6ce0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.135763 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aa661871-e4da-48a2-820b-3c5cec9e6ce0" (UID: "aa661871-e4da-48a2-820b-3c5cec9e6ce0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.144298 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb","Type":"ContainerStarted","Data":"7ab072dde9fa67d649689612cb43ec013273f4118a344bf58f8f008a2bd2586e"}
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.150206 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jqvkj" event={"ID":"0a694432-dcc2-45d4-a492-f43f79169fc4","Type":"ContainerDied","Data":"78ae2e57c99b20d15d7eab71487e5d54a6d07cd924b97d3ad819a5e56dac9917"}
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.150244 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78ae2e57c99b20d15d7eab71487e5d54a6d07cd924b97d3ad819a5e56dac9917"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.150311 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jqvkj"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.152692 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8" event={"ID":"aa661871-e4da-48a2-820b-3c5cec9e6ce0","Type":"ContainerDied","Data":"9878bdd1f675e102456171b3f132ac7ebd3baa4c08882bb665f191575ab874ff"}
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.152750 4955 scope.go:117] "RemoveContainer" containerID="1d09310909f234e9400748e098724a349dcbf157ae5bad13c35aa2eca6a0010a"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.152868 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-f98v8"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.156873 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aa661871-e4da-48a2-820b-3c5cec9e6ce0" (UID: "aa661871-e4da-48a2-820b-3c5cec9e6ce0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.157540 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"541ddc3e-f13a-4d19-9d07-5b1897c10957","Type":"ContainerDied","Data":"b2dbcdc49d7569e04666889acd886d64b0c53701a811f080393c1ffe8a25e012"}
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.157784 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.161705 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6478fb8469-kzjkp" event={"ID":"8716e967-61aa-43b9-9d68-cb6699c5c673","Type":"ContainerStarted","Data":"a1ebb8132e5a9e6b74c9e18628f264568b14b35e42e915e2acb4acfd1274e059"}
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.161908 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6478fb8469-kzjkp"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.179120 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.179152 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.179163 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.179171 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.179181 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.179190 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.179199 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aa661871-e4da-48a2-820b-3c5cec9e6ce0-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.179207 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddb6574-5273-410a-93aa-5293a16dfeba-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.180295 4955 scope.go:117] "RemoveContainer" containerID="56c4ec81304643b558d8f0c823d25005e96d7610937daba4c458f597836edaf4"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.194850 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6478fb8469-kzjkp" podStartSLOduration=11.19483253 podStartE2EDuration="11.19483253s" podCreationTimestamp="2025-11-28 06:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:36.185260598 +0000 UTC m=+1098.774516168" watchObservedRunningTime="2025-11-28 06:39:36.19483253 +0000 UTC m=+1098.784088100"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.223568 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.227093 4955 scope.go:117] "RemoveContainer" containerID="51bbeec0374ec4ab63e44f6776389e37691f9370d7fd70c3857fcc9a4f70e1d1"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.233407 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.251809 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 06:39:36 crc kubenswrapper[4955]: E1128 06:39:36.252241 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerName="glance-httpd"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.252264 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerName="glance-httpd"
Nov 28 06:39:36 crc kubenswrapper[4955]: E1128 06:39:36.252278 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddb6574-5273-410a-93aa-5293a16dfeba" containerName="keystone-bootstrap"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.252287 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddb6574-5273-410a-93aa-5293a16dfeba" containerName="keystone-bootstrap"
Nov 28 06:39:36 crc kubenswrapper[4955]: E1128 06:39:36.252305 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a694432-dcc2-45d4-a492-f43f79169fc4" containerName="placement-db-sync"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.252313 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a694432-dcc2-45d4-a492-f43f79169fc4" containerName="placement-db-sync"
Nov 28 06:39:36 crc kubenswrapper[4955]: E1128 06:39:36.252323 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerName="glance-log"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.252332 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerName="glance-log"
Nov 28 06:39:36 crc kubenswrapper[4955]: E1128 06:39:36.252359 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerName="init"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.252366 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerName="init"
Nov 28 06:39:36 crc kubenswrapper[4955]: E1128 06:39:36.252379 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerName="dnsmasq-dns"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.252386 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerName="dnsmasq-dns"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.252749 4955 scope.go:117] "RemoveContainer" containerID="d58a523f0cf3ee48aca6ceda130fd2a84743df01a30addb7ecfd48a841af1c3f"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.264915 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddb6574-5273-410a-93aa-5293a16dfeba" containerName="keystone-bootstrap"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.264940 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a694432-dcc2-45d4-a492-f43f79169fc4" containerName="placement-db-sync"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.264962 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerName="glance-log"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.264994 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" containerName="glance-httpd"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.265011 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" containerName="dnsmasq-dns"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.265871 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.265944 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.275256 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.275493 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.361934 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.387740 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92qd4\" (UniqueName: \"kubernetes.io/projected/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-kube-api-access-92qd4\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.387786 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.387917 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.387968 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.388087 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.388110 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-logs\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.388141 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.388178 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.489494 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92qd4\" (UniqueName: \"kubernetes.io/projected/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-kube-api-access-92qd4\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.489548 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.489606 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.489625 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.489661 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.489677 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-logs\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.489692 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.489713 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.490114 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.490591 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.491536 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-logs\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") "
pod="openstack/glance-default-external-api-0" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.497555 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.504190 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-scripts\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.505087 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-config-data\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.507386 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.508388 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92qd4\" (UniqueName: \"kubernetes.io/projected/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-kube-api-access-92qd4\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:36 crc kubenswrapper[4955]: 
I1128 06:39:36.553761 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " pod="openstack/glance-default-external-api-0" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.613083 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.634366 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f98v8"] Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.642890 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-f98v8"] Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.907030 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6d446689d4-fvjm6"] Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.908649 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.911579 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.911725 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.911881 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-psgf7" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.911987 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.912108 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 28 06:39:36 crc kubenswrapper[4955]: I1128 06:39:36.945629 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d446689d4-fvjm6"] Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.001600 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-766d6648f9-vfvxt"] Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.002854 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.009288 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766d6648f9-vfvxt"] Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.009849 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.010134 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.010274 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.010400 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xt79j" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.010485 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.011205 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.019366 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-public-tls-certs\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.019415 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-scripts\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " 
pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.019442 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-internal-tls-certs\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.019530 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vktxq\" (UniqueName: \"kubernetes.io/projected/0a8c9e11-5611-4739-9a2c-24ad016682c0-kube-api-access-vktxq\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.019586 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-config-data\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.019611 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-combined-ca-bundle\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.019656 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8c9e11-5611-4739-9a2c-24ad016682c0-logs\") pod \"placement-6d446689d4-fvjm6\" (UID: 
\"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121259 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-combined-ca-bundle\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121309 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-credential-keys\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121333 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-config-data\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121360 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-combined-ca-bundle\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121379 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-public-tls-certs\") pod \"keystone-766d6648f9-vfvxt\" (UID: 
\"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121401 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8c9e11-5611-4739-9a2c-24ad016682c0-logs\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121424 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-scripts\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121442 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-fernet-keys\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121462 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-config-data\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121478 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-public-tls-certs\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " 
pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121496 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-scripts\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121532 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-internal-tls-certs\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121574 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845nz\" (UniqueName: \"kubernetes.io/projected/374d1f5d-9bd1-4362-a245-97f658097965-kube-api-access-845nz\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121607 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vktxq\" (UniqueName: \"kubernetes.io/projected/0a8c9e11-5611-4739-9a2c-24ad016682c0-kube-api-access-vktxq\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.121638 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-internal-tls-certs\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " 
pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.122877 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8c9e11-5611-4739-9a2c-24ad016682c0-logs\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.126224 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-config-data\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.126634 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-scripts\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.127278 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-public-tls-certs\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.128880 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-internal-tls-certs\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.130100 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8c9e11-5611-4739-9a2c-24ad016682c0-combined-ca-bundle\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.147149 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vktxq\" (UniqueName: \"kubernetes.io/projected/0a8c9e11-5611-4739-9a2c-24ad016682c0-kube-api-access-vktxq\") pod \"placement-6d446689d4-fvjm6\" (UID: \"0a8c9e11-5611-4739-9a2c-24ad016682c0\") " pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.193581 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aed34078-a41e-4dda-bb13-b8dd5379ba91","Type":"ContainerStarted","Data":"5537610aa815d2e73a9e59b4758e0ca9bf904f91db9bfd1be6cebbe76d6bcc0f"} Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.223423 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845nz\" (UniqueName: \"kubernetes.io/projected/374d1f5d-9bd1-4362-a245-97f658097965-kube-api-access-845nz\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.223495 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-internal-tls-certs\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.223558 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-combined-ca-bundle\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.224641 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-credential-keys\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.224774 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-public-tls-certs\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.224861 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-scripts\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.224902 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-fernet-keys\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.224944 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-config-data\") pod \"keystone-766d6648f9-vfvxt\" 
(UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.229247 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-credential-keys\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.229629 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-internal-tls-certs\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.232569 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.234682 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-fernet-keys\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.234993 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-scripts\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.237993 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-public-tls-certs\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.242094 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-combined-ca-bundle\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.243024 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374d1f5d-9bd1-4362-a245-97f658097965-config-data\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.245997 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845nz\" (UniqueName: \"kubernetes.io/projected/374d1f5d-9bd1-4362-a245-97f658097965-kube-api-access-845nz\") pod \"keystone-766d6648f9-vfvxt\" (UID: \"374d1f5d-9bd1-4362-a245-97f658097965\") " pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.289893 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.320498 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.574931 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d446689d4-fvjm6"] Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.715376 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541ddc3e-f13a-4d19-9d07-5b1897c10957" path="/var/lib/kubelet/pods/541ddc3e-f13a-4d19-9d07-5b1897c10957/volumes" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.716239 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa661871-e4da-48a2-820b-3c5cec9e6ce0" path="/var/lib/kubelet/pods/aa661871-e4da-48a2-820b-3c5cec9e6ce0/volumes" Nov 28 06:39:37 crc kubenswrapper[4955]: I1128 06:39:37.849433 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766d6648f9-vfvxt"] Nov 28 06:39:37 crc kubenswrapper[4955]: W1128 06:39:37.853645 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod374d1f5d_9bd1_4362_a245_97f658097965.slice/crio-b3871b52b6d205f55d2307567f5c5db853c41860985cd8b1532cd70eaca2dce3 WatchSource:0}: Error finding container b3871b52b6d205f55d2307567f5c5db853c41860985cd8b1532cd70eaca2dce3: Status 404 returned error can't find the container with id b3871b52b6d205f55d2307567f5c5db853c41860985cd8b1532cd70eaca2dce3 Nov 28 06:39:38 crc kubenswrapper[4955]: I1128 06:39:38.216016 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d446689d4-fvjm6" event={"ID":"0a8c9e11-5611-4739-9a2c-24ad016682c0","Type":"ContainerStarted","Data":"38b97d4edc6f55a4d45dfc61c5e49036b8b9d82de7e91019330b2994700f3d18"} Nov 28 06:39:38 crc kubenswrapper[4955]: I1128 06:39:38.216064 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d446689d4-fvjm6" 
event={"ID":"0a8c9e11-5611-4739-9a2c-24ad016682c0","Type":"ContainerStarted","Data":"0377e777f9218c391147dab980e1079248f747c2ea5559722e8381be65b3f2d0"} Nov 28 06:39:38 crc kubenswrapper[4955]: I1128 06:39:38.224608 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aed34078-a41e-4dda-bb13-b8dd5379ba91","Type":"ContainerStarted","Data":"d6f084fb3be406b3cfb2874cf1b7986614b2594cb7847501b9fcf7eec3d9a640"} Nov 28 06:39:38 crc kubenswrapper[4955]: I1128 06:39:38.224648 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aed34078-a41e-4dda-bb13-b8dd5379ba91","Type":"ContainerStarted","Data":"c16dad1f89126d44dfe03e59d9908d81ed918475b03bdd78dcfc01052b9f88e4"} Nov 28 06:39:38 crc kubenswrapper[4955]: I1128 06:39:38.227672 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d6648f9-vfvxt" event={"ID":"374d1f5d-9bd1-4362-a245-97f658097965","Type":"ContainerStarted","Data":"b3871b52b6d205f55d2307567f5c5db853c41860985cd8b1532cd70eaca2dce3"} Nov 28 06:39:38 crc kubenswrapper[4955]: I1128 06:39:38.237201 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebf672dd-567f-4cca-b5c8-7617bb3a02c1","Type":"ContainerStarted","Data":"e569e662d4ece535c24464715ef22ed5cdd2946a6cffacb42b78354b85d10af1"} Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.252499 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d6648f9-vfvxt" event={"ID":"374d1f5d-9bd1-4362-a245-97f658097965","Type":"ContainerStarted","Data":"68009fc3163e8844ff1f7992993b8044f6d685d6b6470d01ace9ef8d9964a9ed"} Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.252835 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.256204 4955 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebf672dd-567f-4cca-b5c8-7617bb3a02c1","Type":"ContainerStarted","Data":"349206e3158d352a88674cc03d7ee8e0af33b899d8b967adc29c619293006bb5"} Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.256238 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebf672dd-567f-4cca-b5c8-7617bb3a02c1","Type":"ContainerStarted","Data":"2791ed070156d863d7468ff0af4b05b241b6873365d1545322d7b4f9d648d74e"} Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.260045 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7lw7" event={"ID":"0ffeda94-da23-484b-b623-fe3101c66890","Type":"ContainerStarted","Data":"29b5074a5ec47e9df1b5a1ec376b9b74bd8e66bae0e08f34fe8c5df01f665d85"} Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.264262 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d446689d4-fvjm6" event={"ID":"0a8c9e11-5611-4739-9a2c-24ad016682c0","Type":"ContainerStarted","Data":"4fd4172e3a25ea5dda2cc0532a44d41ac11063251aa87c65821d7087b09f01e3"} Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.264389 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.266097 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d4bdh" event={"ID":"c39c6827-9dc3-482d-a268-8ba9348b925e","Type":"ContainerStarted","Data":"e9bdb652f84a64351bdc8fbca14f55c3a9a4c49d969670a28d027efe545304fe"} Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.277784 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-766d6648f9-vfvxt" podStartSLOduration=3.2777416600000002 podStartE2EDuration="3.27774166s" podCreationTimestamp="2025-11-28 06:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:39.267498838 +0000 UTC m=+1101.856754418" watchObservedRunningTime="2025-11-28 06:39:39.27774166 +0000 UTC m=+1101.866997230" Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.300704 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.300683453 podStartE2EDuration="9.300683453s" podCreationTimestamp="2025-11-28 06:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:39.295268969 +0000 UTC m=+1101.884524569" watchObservedRunningTime="2025-11-28 06:39:39.300683453 +0000 UTC m=+1101.889939023" Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.326789 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-d4bdh" podStartSLOduration=3.010643626 podStartE2EDuration="45.326766725s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="2025-11-28 06:38:56.028658169 +0000 UTC m=+1058.617913739" lastFinishedPulling="2025-11-28 06:39:38.344781268 +0000 UTC m=+1100.934036838" observedRunningTime="2025-11-28 06:39:39.31568235 +0000 UTC m=+1101.904937920" watchObservedRunningTime="2025-11-28 06:39:39.326766725 +0000 UTC m=+1101.916022295" Nov 28 06:39:39 crc kubenswrapper[4955]: I1128 06:39:39.371484 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.371418626 podStartE2EDuration="3.371418626s" podCreationTimestamp="2025-11-28 06:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:39.36310857 +0000 UTC m=+1101.952364140" watchObservedRunningTime="2025-11-28 06:39:39.371418626 +0000 UTC m=+1101.960674216" Nov 28 06:39:39 crc 
kubenswrapper[4955]: I1128 06:39:39.376983 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-h7lw7" podStartSLOduration=3.568557689 podStartE2EDuration="45.376970925s" podCreationTimestamp="2025-11-28 06:38:54 +0000 UTC" firstStartedPulling="2025-11-28 06:38:56.704557621 +0000 UTC m=+1059.293813191" lastFinishedPulling="2025-11-28 06:39:38.512970857 +0000 UTC m=+1101.102226427" observedRunningTime="2025-11-28 06:39:39.33994136 +0000 UTC m=+1101.929196960" watchObservedRunningTime="2025-11-28 06:39:39.376970925 +0000 UTC m=+1101.966226495" Nov 28 06:39:40 crc kubenswrapper[4955]: I1128 06:39:40.274832 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:39:40 crc kubenswrapper[4955]: I1128 06:39:40.456345 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:40 crc kubenswrapper[4955]: I1128 06:39:40.456404 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:40 crc kubenswrapper[4955]: I1128 06:39:40.498202 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:40 crc kubenswrapper[4955]: I1128 06:39:40.505042 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:40 crc kubenswrapper[4955]: I1128 06:39:40.519781 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6d446689d4-fvjm6" podStartSLOduration=4.5197687 podStartE2EDuration="4.5197687s" podCreationTimestamp="2025-11-28 06:39:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:39.402747548 +0000 UTC m=+1101.992003138" 
watchObservedRunningTime="2025-11-28 06:39:40.5197687 +0000 UTC m=+1103.109024270" Nov 28 06:39:41 crc kubenswrapper[4955]: I1128 06:39:41.285803 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:41 crc kubenswrapper[4955]: I1128 06:39:41.285963 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:42 crc kubenswrapper[4955]: I1128 06:39:42.296226 4955 generic.go:334] "Generic (PLEG): container finished" podID="0ffeda94-da23-484b-b623-fe3101c66890" containerID="29b5074a5ec47e9df1b5a1ec376b9b74bd8e66bae0e08f34fe8c5df01f665d85" exitCode=0 Nov 28 06:39:42 crc kubenswrapper[4955]: I1128 06:39:42.296303 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7lw7" event={"ID":"0ffeda94-da23-484b-b623-fe3101c66890","Type":"ContainerDied","Data":"29b5074a5ec47e9df1b5a1ec376b9b74bd8e66bae0e08f34fe8c5df01f665d85"} Nov 28 06:39:43 crc kubenswrapper[4955]: I1128 06:39:43.945060 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-9c465b4d8-cslvv" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Nov 28 06:39:44 crc kubenswrapper[4955]: I1128 06:39:44.207812 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f45c5b6-nqg9b" podUID="0540bb1f-c904-4b07-acda-ce47d0bdfa7c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Nov 28 06:39:44 crc kubenswrapper[4955]: I1128 06:39:44.403379 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:45 crc 
kubenswrapper[4955]: I1128 06:39:45.331470 4955 generic.go:334] "Generic (PLEG): container finished" podID="c39c6827-9dc3-482d-a268-8ba9348b925e" containerID="e9bdb652f84a64351bdc8fbca14f55c3a9a4c49d969670a28d027efe545304fe" exitCode=0 Nov 28 06:39:45 crc kubenswrapper[4955]: I1128 06:39:45.331543 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d4bdh" event={"ID":"c39c6827-9dc3-482d-a268-8ba9348b925e","Type":"ContainerDied","Data":"e9bdb652f84a64351bdc8fbca14f55c3a9a4c49d969670a28d027efe545304fe"} Nov 28 06:39:45 crc kubenswrapper[4955]: I1128 06:39:45.755074 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:39:45 crc kubenswrapper[4955]: I1128 06:39:45.942564 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-db-sync-config-data\") pod \"0ffeda94-da23-484b-b623-fe3101c66890\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " Nov 28 06:39:45 crc kubenswrapper[4955]: I1128 06:39:45.942666 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-combined-ca-bundle\") pod \"0ffeda94-da23-484b-b623-fe3101c66890\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " Nov 28 06:39:45 crc kubenswrapper[4955]: I1128 06:39:45.942695 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6z4f\" (UniqueName: \"kubernetes.io/projected/0ffeda94-da23-484b-b623-fe3101c66890-kube-api-access-m6z4f\") pod \"0ffeda94-da23-484b-b623-fe3101c66890\" (UID: \"0ffeda94-da23-484b-b623-fe3101c66890\") " Nov 28 06:39:45 crc kubenswrapper[4955]: I1128 06:39:45.952630 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0ffeda94-da23-484b-b623-fe3101c66890" (UID: "0ffeda94-da23-484b-b623-fe3101c66890"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:45 crc kubenswrapper[4955]: I1128 06:39:45.957873 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ffeda94-da23-484b-b623-fe3101c66890-kube-api-access-m6z4f" (OuterVolumeSpecName: "kube-api-access-m6z4f") pod "0ffeda94-da23-484b-b623-fe3101c66890" (UID: "0ffeda94-da23-484b-b623-fe3101c66890"). InnerVolumeSpecName "kube-api-access-m6z4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:45 crc kubenswrapper[4955]: I1128 06:39:45.983969 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ffeda94-da23-484b-b623-fe3101c66890" (UID: "0ffeda94-da23-484b-b623-fe3101c66890"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.045483 4955 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.045525 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ffeda94-da23-484b-b623-fe3101c66890-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.045535 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6z4f\" (UniqueName: \"kubernetes.io/projected/0ffeda94-da23-484b-b623-fe3101c66890-kube-api-access-m6z4f\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.347724 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-h7lw7" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.348134 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-h7lw7" event={"ID":"0ffeda94-da23-484b-b623-fe3101c66890","Type":"ContainerDied","Data":"f5ac7dc7b4f4e599493322f9a8ff36837b60882118758548ff0042092d9644f2"} Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.348178 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5ac7dc7b4f4e599493322f9a8ff36837b60882118758548ff0042092d9644f2" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.367444 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.614570 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.614894 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.682018 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.684850 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 06:39:46 crc kubenswrapper[4955]: E1128 06:39:46.733782 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.843234 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.963642 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-db-sync-config-data\") pod \"c39c6827-9dc3-482d-a268-8ba9348b925e\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.963771 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-config-data\") pod \"c39c6827-9dc3-482d-a268-8ba9348b925e\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.963803 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-scripts\") pod \"c39c6827-9dc3-482d-a268-8ba9348b925e\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.963843 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-combined-ca-bundle\") pod \"c39c6827-9dc3-482d-a268-8ba9348b925e\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.963883 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc58h\" (UniqueName: \"kubernetes.io/projected/c39c6827-9dc3-482d-a268-8ba9348b925e-kube-api-access-pc58h\") pod \"c39c6827-9dc3-482d-a268-8ba9348b925e\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.963942 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/c39c6827-9dc3-482d-a268-8ba9348b925e-etc-machine-id\") pod \"c39c6827-9dc3-482d-a268-8ba9348b925e\" (UID: \"c39c6827-9dc3-482d-a268-8ba9348b925e\") " Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.964377 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c39c6827-9dc3-482d-a268-8ba9348b925e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c39c6827-9dc3-482d-a268-8ba9348b925e" (UID: "c39c6827-9dc3-482d-a268-8ba9348b925e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.977863 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c39c6827-9dc3-482d-a268-8ba9348b925e-kube-api-access-pc58h" (OuterVolumeSpecName: "kube-api-access-pc58h") pod "c39c6827-9dc3-482d-a268-8ba9348b925e" (UID: "c39c6827-9dc3-482d-a268-8ba9348b925e"). InnerVolumeSpecName "kube-api-access-pc58h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.978255 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c39c6827-9dc3-482d-a268-8ba9348b925e" (UID: "c39c6827-9dc3-482d-a268-8ba9348b925e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:46 crc kubenswrapper[4955]: I1128 06:39:46.983763 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-scripts" (OuterVolumeSpecName: "scripts") pod "c39c6827-9dc3-482d-a268-8ba9348b925e" (UID: "c39c6827-9dc3-482d-a268-8ba9348b925e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.038673 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c39c6827-9dc3-482d-a268-8ba9348b925e" (UID: "c39c6827-9dc3-482d-a268-8ba9348b925e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.067172 4955 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.067209 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.067219 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.067227 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc58h\" (UniqueName: \"kubernetes.io/projected/c39c6827-9dc3-482d-a268-8ba9348b925e-kube-api-access-pc58h\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.067237 4955 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c39c6827-9dc3-482d-a268-8ba9348b925e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.078906 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-config-data" (OuterVolumeSpecName: "config-data") pod "c39c6827-9dc3-482d-a268-8ba9348b925e" (UID: "c39c6827-9dc3-482d-a268-8ba9348b925e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.108922 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6985c74bc8-qgvjf"] Nov 28 06:39:47 crc kubenswrapper[4955]: E1128 06:39:47.109323 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ffeda94-da23-484b-b623-fe3101c66890" containerName="barbican-db-sync" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.109339 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ffeda94-da23-484b-b623-fe3101c66890" containerName="barbican-db-sync" Nov 28 06:39:47 crc kubenswrapper[4955]: E1128 06:39:47.109352 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c39c6827-9dc3-482d-a268-8ba9348b925e" containerName="cinder-db-sync" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.109359 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c39c6827-9dc3-482d-a268-8ba9348b925e" containerName="cinder-db-sync" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.109538 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ffeda94-da23-484b-b623-fe3101c66890" containerName="barbican-db-sync" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.109559 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c39c6827-9dc3-482d-a268-8ba9348b925e" containerName="cinder-db-sync" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.110412 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.116329 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.116600 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-lrkrv" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.116638 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.120109 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5dfcd47cfc-s75nx"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.121549 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.124301 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.138034 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6985c74bc8-qgvjf"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.159562 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5dfcd47cfc-s75nx"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.171471 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c39c6827-9dc3-482d-a268-8ba9348b925e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.192415 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7w77m"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.193918 4955 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.225609 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7w77m"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273370 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273418 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-combined-ca-bundle\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273457 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-config-data\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273484 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-config-data-custom\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 
06:39:47.273558 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30cb01b-f625-4031-98a0-272f85d43a81-logs\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273600 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273640 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24535783-21c6-4550-965e-7fd84038058b-logs\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273643 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fb779f59d-wm9cr"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273662 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-config-data\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273683 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273698 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-config\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273724 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-combined-ca-bundle\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273740 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9pml\" (UniqueName: \"kubernetes.io/projected/24535783-21c6-4550-965e-7fd84038058b-kube-api-access-z9pml\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273760 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-config-data-custom\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273778 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h87lc\" (UniqueName: \"kubernetes.io/projected/f30cb01b-f625-4031-98a0-272f85d43a81-kube-api-access-h87lc\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273795 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.273812 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxhss\" (UniqueName: \"kubernetes.io/projected/232d06a5-6f80-483b-8735-8c51b46df85d-kube-api-access-zxhss\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.275894 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.282797 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.286187 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fb779f59d-wm9cr"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.362636 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-d4bdh" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.362632 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-d4bdh" event={"ID":"c39c6827-9dc3-482d-a268-8ba9348b925e","Type":"ContainerDied","Data":"58d870d4f8f5a22a3d6945f65a967454a291beab2112f3406f36301158b5d771"} Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.362750 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58d870d4f8f5a22a3d6945f65a967454a291beab2112f3406f36301158b5d771" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.368027 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="ceilometer-notification-agent" containerID="cri-o://c51d62fb4c5ee6832d2e95611a72db7923d2f0d61cb80a135e6e343c7c7fa2db" gracePeriod=30 Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.369002 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb","Type":"ContainerStarted","Data":"c060893e7a4c8124aefc06bfdea3d87482b132e20cea2c71b6f85558898f1b42"} Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.369052 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.369624 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="proxy-httpd" containerID="cri-o://c060893e7a4c8124aefc06bfdea3d87482b132e20cea2c71b6f85558898f1b42" gracePeriod=30 Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.369706 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" 
containerName="sg-core" containerID="cri-o://7ab072dde9fa67d649689612cb43ec013273f4118a344bf58f8f008a2bd2586e" gracePeriod=30 Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.369769 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.369782 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382256 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-logs\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382296 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24535783-21c6-4550-965e-7fd84038058b-logs\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382343 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-config-data\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382390 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " 
pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382411 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-config\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382439 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-combined-ca-bundle\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382455 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9pml\" (UniqueName: \"kubernetes.io/projected/24535783-21c6-4550-965e-7fd84038058b-kube-api-access-z9pml\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382478 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-config-data-custom\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382609 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h87lc\" (UniqueName: \"kubernetes.io/projected/f30cb01b-f625-4031-98a0-272f85d43a81-kube-api-access-h87lc\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: 
\"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382630 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382648 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhss\" (UniqueName: \"kubernetes.io/projected/232d06a5-6f80-483b-8735-8c51b46df85d-kube-api-access-zxhss\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382730 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382749 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-combined-ca-bundle\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382769 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-combined-ca-bundle\") 
pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382808 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwks\" (UniqueName: \"kubernetes.io/projected/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-kube-api-access-mzwks\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382833 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-config-data\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382858 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data-custom\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382876 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-config-data-custom\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382893 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382909 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30cb01b-f625-4031-98a0-272f85d43a81-logs\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.382944 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.384582 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-swift-storage-0\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.384859 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24535783-21c6-4550-965e-7fd84038058b-logs\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.385456 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-nb\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.388277 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-sb\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.394039 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30cb01b-f625-4031-98a0-272f85d43a81-logs\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.395100 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-config\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.395364 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-svc\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.393882 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-combined-ca-bundle\") pod 
\"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.398384 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-config-data\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.399441 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-combined-ca-bundle\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.400205 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30cb01b-f625-4031-98a0-272f85d43a81-config-data-custom\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.400645 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-config-data-custom\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.414822 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9pml\" (UniqueName: 
\"kubernetes.io/projected/24535783-21c6-4550-965e-7fd84038058b-kube-api-access-z9pml\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.417477 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhss\" (UniqueName: \"kubernetes.io/projected/232d06a5-6f80-483b-8735-8c51b46df85d-kube-api-access-zxhss\") pod \"dnsmasq-dns-75c8ddd69c-7w77m\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.418601 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24535783-21c6-4550-965e-7fd84038058b-config-data\") pod \"barbican-worker-5dfcd47cfc-s75nx\" (UID: \"24535783-21c6-4550-965e-7fd84038058b\") " pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.445102 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h87lc\" (UniqueName: \"kubernetes.io/projected/f30cb01b-f625-4031-98a0-272f85d43a81-kube-api-access-h87lc\") pod \"barbican-keystone-listener-6985c74bc8-qgvjf\" (UID: \"f30cb01b-f625-4031-98a0-272f85d43a81\") " pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.462616 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5dfcd47cfc-s75nx" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.484180 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-logs\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.484358 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-combined-ca-bundle\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.484391 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzwks\" (UniqueName: \"kubernetes.io/projected/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-kube-api-access-mzwks\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.484442 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data-custom\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.484466 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " 
pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.484657 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-logs\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.489334 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.497412 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-combined-ca-bundle\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.503695 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data-custom\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.517719 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.529654 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzwks\" (UniqueName: \"kubernetes.io/projected/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-kube-api-access-mzwks\") pod \"barbican-api-5fb779f59d-wm9cr\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.601665 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.603976 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.606308 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.606968 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.607213 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.607864 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7dntf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.610923 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.614597 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.662707 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7w77m"] Nov 28 06:39:47 crc 
kubenswrapper[4955]: I1128 06:39:47.692631 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vctsc\" (UniqueName: \"kubernetes.io/projected/4e953827-0dd0-4148-81de-bfb2ab8942bd-kube-api-access-vctsc\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.692705 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-scripts\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.692731 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.692765 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e953827-0dd0-4148-81de-bfb2ab8942bd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.692823 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.692838 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.741028 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.784638 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-2cdsc"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.786161 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-2cdsc"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.786315 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.794066 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vctsc\" (UniqueName: \"kubernetes.io/projected/4e953827-0dd0-4148-81de-bfb2ab8942bd-kube-api-access-vctsc\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.794113 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-scripts\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.794135 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.794168 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e953827-0dd0-4148-81de-bfb2ab8942bd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.794222 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.794237 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.800674 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e953827-0dd0-4148-81de-bfb2ab8942bd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.803626 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-scripts\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " 
pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.811161 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.812182 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.819136 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.840751 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vctsc\" (UniqueName: \"kubernetes.io/projected/4e953827-0dd0-4148-81de-bfb2ab8942bd-kube-api-access-vctsc\") pod \"cinder-scheduler-0\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.901308 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-svc\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.901466 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.901526 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.901585 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tj7d\" (UniqueName: \"kubernetes.io/projected/7c66787d-973a-4cfa-8cde-75249495ec65-kube-api-access-5tj7d\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.901615 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-config\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.901681 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:47 crc kubenswrapper[4955]: 
I1128 06:39:47.965130 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.975988 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.980467 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 06:39:47 crc kubenswrapper[4955]: I1128 06:39:47.983547 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.001248 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.003607 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.003694 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.003734 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tj7d\" (UniqueName: \"kubernetes.io/projected/7c66787d-973a-4cfa-8cde-75249495ec65-kube-api-access-5tj7d\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 
06:39:48.003779 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-config\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.003822 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.003879 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-svc\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.004993 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-svc\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.006896 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.010640 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.011997 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.013322 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-config\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.033284 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tj7d\" (UniqueName: \"kubernetes.io/projected/7c66787d-973a-4cfa-8cde-75249495ec65-kube-api-access-5tj7d\") pod \"dnsmasq-dns-5784cf869f-2cdsc\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.105989 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba3f0f19-5fce-4278-b609-084ca27e4dfc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.106045 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data-custom\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.106077 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.106107 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-scripts\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.106162 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3f0f19-5fce-4278-b609-084ca27e4dfc-logs\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.106193 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbcbz\" (UniqueName: \"kubernetes.io/projected/ba3f0f19-5fce-4278-b609-084ca27e4dfc-kube-api-access-qbcbz\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.106209 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.208831 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba3f0f19-5fce-4278-b609-084ca27e4dfc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.208882 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data-custom\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.208928 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba3f0f19-5fce-4278-b609-084ca27e4dfc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.209314 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.209350 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-scripts\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.209387 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ba3f0f19-5fce-4278-b609-084ca27e4dfc-logs\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.209410 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbcbz\" (UniqueName: \"kubernetes.io/projected/ba3f0f19-5fce-4278-b609-084ca27e4dfc-kube-api-access-qbcbz\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.209425 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.213068 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3f0f19-5fce-4278-b609-084ca27e4dfc-logs\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.219766 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data-custom\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.220149 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.221440 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-scripts\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.226554 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.234680 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.244144 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbcbz\" (UniqueName: \"kubernetes.io/projected/ba3f0f19-5fce-4278-b609-084ca27e4dfc-kube-api-access-qbcbz\") pod \"cinder-api-0\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.249375 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7w77m"] Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.270758 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5dfcd47cfc-s75nx"] Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.406640 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5dfcd47cfc-s75nx" event={"ID":"24535783-21c6-4550-965e-7fd84038058b","Type":"ContainerStarted","Data":"a6e094fcbf8e5864bf9ad2ae27bbf0148ab7e34350ae7bdee5b859d6fac7df6e"} Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.417131 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" 
event={"ID":"232d06a5-6f80-483b-8735-8c51b46df85d","Type":"ContainerStarted","Data":"db91fef3c39544ca57848b6859b8216e147aac66532f7983d661c1d48ab1096c"} Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.417271 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.418402 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fb779f59d-wm9cr"] Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.426208 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6985c74bc8-qgvjf"] Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.428774 4955 generic.go:334] "Generic (PLEG): container finished" podID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerID="c060893e7a4c8124aefc06bfdea3d87482b132e20cea2c71b6f85558898f1b42" exitCode=0 Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.428805 4955 generic.go:334] "Generic (PLEG): container finished" podID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerID="7ab072dde9fa67d649689612cb43ec013273f4118a344bf58f8f008a2bd2586e" exitCode=2 Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.428943 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb","Type":"ContainerDied","Data":"c060893e7a4c8124aefc06bfdea3d87482b132e20cea2c71b6f85558898f1b42"} Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.428971 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb","Type":"ContainerDied","Data":"7ab072dde9fa67d649689612cb43ec013273f4118a344bf58f8f008a2bd2586e"} Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.677662 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 06:39:48 crc kubenswrapper[4955]: W1128 06:39:48.748778 4955 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e953827_0dd0_4148_81de_bfb2ab8942bd.slice/crio-3007d8ac2fda9b6cc09d3c40bece788f4d21d5996230c299ae5138357e9d1b25 WatchSource:0}: Error finding container 3007d8ac2fda9b6cc09d3c40bece788f4d21d5996230c299ae5138357e9d1b25: Status 404 returned error can't find the container with id 3007d8ac2fda9b6cc09d3c40bece788f4d21d5996230c299ae5138357e9d1b25 Nov 28 06:39:48 crc kubenswrapper[4955]: I1128 06:39:48.902233 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-2cdsc"] Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.145578 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.507153 4955 generic.go:334] "Generic (PLEG): container finished" podID="7c66787d-973a-4cfa-8cde-75249495ec65" containerID="2347d7cd31267602455505c48197a991eb6eb1b5439a112df740581505ba3565" exitCode=0 Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.507402 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" event={"ID":"7c66787d-973a-4cfa-8cde-75249495ec65","Type":"ContainerDied","Data":"2347d7cd31267602455505c48197a991eb6eb1b5439a112df740581505ba3565"} Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.507426 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" event={"ID":"7c66787d-973a-4cfa-8cde-75249495ec65","Type":"ContainerStarted","Data":"49a0d25408f27cd8450cf7c74b8a532bf1f145641db3c2e55bbcf26eda1b5672"} Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.517937 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fb779f59d-wm9cr" event={"ID":"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6","Type":"ContainerStarted","Data":"1712c9c1d6ab3ff40c3ff583338abb2ae2d9ed67e47719cad0c670dc0fac4be4"} Nov 28 06:39:49 
crc kubenswrapper[4955]: I1128 06:39:49.517985 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fb779f59d-wm9cr" event={"ID":"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6","Type":"ContainerStarted","Data":"7ea2f9d31a77702ff5507990570e7f9bbabe6738f2f4ff05f57f5af18dde1f64"} Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.527941 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" event={"ID":"f30cb01b-f625-4031-98a0-272f85d43a81","Type":"ContainerStarted","Data":"eb202cb908a3b707d1087ff3a042d261369c0a438a63e625eda5355ee82fea14"} Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.535070 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e953827-0dd0-4148-81de-bfb2ab8942bd","Type":"ContainerStarted","Data":"3007d8ac2fda9b6cc09d3c40bece788f4d21d5996230c299ae5138357e9d1b25"} Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.552684 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ba3f0f19-5fce-4278-b609-084ca27e4dfc","Type":"ContainerStarted","Data":"3ef7b0f12d866c784cd979d2c1dfccdb44ee9ec3e4cdd725297f1cea57d82b18"} Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.560744 4955 generic.go:334] "Generic (PLEG): container finished" podID="232d06a5-6f80-483b-8735-8c51b46df85d" containerID="2c82b3f4efa96b38494fdaf8fb7ca732d70142d3332fe6a7b5e2578d2c47da7a" exitCode=0 Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.560844 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.560853 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:39:49 crc kubenswrapper[4955]: I1128 06:39:49.560921 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" 
event={"ID":"232d06a5-6f80-483b-8735-8c51b46df85d","Type":"ContainerDied","Data":"2c82b3f4efa96b38494fdaf8fb7ca732d70142d3332fe6a7b5e2578d2c47da7a"} Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.130133 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.312266 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-config\") pod \"232d06a5-6f80-483b-8735-8c51b46df85d\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.312386 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-nb\") pod \"232d06a5-6f80-483b-8735-8c51b46df85d\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.312416 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-svc\") pod \"232d06a5-6f80-483b-8735-8c51b46df85d\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.312484 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-sb\") pod \"232d06a5-6f80-483b-8735-8c51b46df85d\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.312627 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-swift-storage-0\") pod 
\"232d06a5-6f80-483b-8735-8c51b46df85d\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.312688 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxhss\" (UniqueName: \"kubernetes.io/projected/232d06a5-6f80-483b-8735-8c51b46df85d-kube-api-access-zxhss\") pod \"232d06a5-6f80-483b-8735-8c51b46df85d\" (UID: \"232d06a5-6f80-483b-8735-8c51b46df85d\") " Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.318303 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232d06a5-6f80-483b-8735-8c51b46df85d-kube-api-access-zxhss" (OuterVolumeSpecName: "kube-api-access-zxhss") pod "232d06a5-6f80-483b-8735-8c51b46df85d" (UID: "232d06a5-6f80-483b-8735-8c51b46df85d"). InnerVolumeSpecName "kube-api-access-zxhss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.362872 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "232d06a5-6f80-483b-8735-8c51b46df85d" (UID: "232d06a5-6f80-483b-8735-8c51b46df85d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.374905 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-config" (OuterVolumeSpecName: "config") pod "232d06a5-6f80-483b-8735-8c51b46df85d" (UID: "232d06a5-6f80-483b-8735-8c51b46df85d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.393072 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "232d06a5-6f80-483b-8735-8c51b46df85d" (UID: "232d06a5-6f80-483b-8735-8c51b46df85d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.416405 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxhss\" (UniqueName: \"kubernetes.io/projected/232d06a5-6f80-483b-8735-8c51b46df85d-kube-api-access-zxhss\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.416447 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.416461 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.416472 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.419928 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "232d06a5-6f80-483b-8735-8c51b46df85d" (UID: "232d06a5-6f80-483b-8735-8c51b46df85d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.433556 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "232d06a5-6f80-483b-8735-8c51b46df85d" (UID: "232d06a5-6f80-483b-8735-8c51b46df85d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.453808 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.521664 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.521697 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/232d06a5-6f80-483b-8735-8c51b46df85d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.542650 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.577383 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" event={"ID":"7c66787d-973a-4cfa-8cde-75249495ec65","Type":"ContainerStarted","Data":"a96e1209f09fb52b067748a941531ba0c6040e7291e9dfb20a78424b9a9fc13c"} Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.578130 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.596352 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-5fb779f59d-wm9cr" event={"ID":"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6","Type":"ContainerStarted","Data":"e17470f6dab2e167f7f7013223651e42e2e959bec2ae84c35b28ac4b3b12b575"} Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.597395 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.597427 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.600353 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ba3f0f19-5fce-4278-b609-084ca27e4dfc","Type":"ContainerStarted","Data":"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b"} Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.601827 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" podStartSLOduration=3.601808711 podStartE2EDuration="3.601808711s" podCreationTimestamp="2025-11-28 06:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:50.595246402 +0000 UTC m=+1113.184501982" watchObservedRunningTime="2025-11-28 06:39:50.601808711 +0000 UTC m=+1113.191064281" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.603298 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.603833 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.606086 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75c8ddd69c-7w77m" event={"ID":"232d06a5-6f80-483b-8735-8c51b46df85d","Type":"ContainerDied","Data":"db91fef3c39544ca57848b6859b8216e147aac66532f7983d661c1d48ab1096c"} Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.606136 4955 scope.go:117] "RemoveContainer" containerID="2c82b3f4efa96b38494fdaf8fb7ca732d70142d3332fe6a7b5e2578d2c47da7a" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.624457 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fb779f59d-wm9cr" podStartSLOduration=3.6244434070000002 podStartE2EDuration="3.624443407s" podCreationTimestamp="2025-11-28 06:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:50.62382927 +0000 UTC m=+1113.213084840" watchObservedRunningTime="2025-11-28 06:39:50.624443407 +0000 UTC m=+1113.213698977" Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.680349 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7w77m"] Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.724733 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75c8ddd69c-7w77m"] Nov 28 06:39:50 crc kubenswrapper[4955]: I1128 06:39:50.878919 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 06:39:51 crc kubenswrapper[4955]: I1128 06:39:51.620006 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e953827-0dd0-4148-81de-bfb2ab8942bd","Type":"ContainerStarted","Data":"5e657792d797d8dd80d3389a6ce634a026a69a2f7dad6b338591a9238374bf5b"} Nov 28 06:39:51 crc kubenswrapper[4955]: 
I1128 06:39:51.623239 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ba3f0f19-5fce-4278-b609-084ca27e4dfc","Type":"ContainerStarted","Data":"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1"} Nov 28 06:39:51 crc kubenswrapper[4955]: I1128 06:39:51.623399 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerName="cinder-api-log" containerID="cri-o://71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b" gracePeriod=30 Nov 28 06:39:51 crc kubenswrapper[4955]: I1128 06:39:51.623455 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 06:39:51 crc kubenswrapper[4955]: I1128 06:39:51.623477 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerName="cinder-api" containerID="cri-o://7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1" gracePeriod=30 Nov 28 06:39:51 crc kubenswrapper[4955]: I1128 06:39:51.651491 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.651471671 podStartE2EDuration="4.651471671s" podCreationTimestamp="2025-11-28 06:39:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:51.644911312 +0000 UTC m=+1114.234166892" watchObservedRunningTime="2025-11-28 06:39:51.651471671 +0000 UTC m=+1114.240727241" Nov 28 06:39:51 crc kubenswrapper[4955]: I1128 06:39:51.714983 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232d06a5-6f80-483b-8735-8c51b46df85d" path="/var/lib/kubelet/pods/232d06a5-6f80-483b-8735-8c51b46df85d/volumes" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.216557 4955 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.357764 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data-custom\") pod \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.358691 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-combined-ca-bundle\") pod \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.358793 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbcbz\" (UniqueName: \"kubernetes.io/projected/ba3f0f19-5fce-4278-b609-084ca27e4dfc-kube-api-access-qbcbz\") pod \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.358823 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-scripts\") pod \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.358844 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba3f0f19-5fce-4278-b609-084ca27e4dfc-etc-machine-id\") pod \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.358894 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3f0f19-5fce-4278-b609-084ca27e4dfc-logs\") pod \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.359001 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data\") pod \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\" (UID: \"ba3f0f19-5fce-4278-b609-084ca27e4dfc\") " Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.361577 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba3f0f19-5fce-4278-b609-084ca27e4dfc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ba3f0f19-5fce-4278-b609-084ca27e4dfc" (UID: "ba3f0f19-5fce-4278-b609-084ca27e4dfc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.362424 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba3f0f19-5fce-4278-b609-084ca27e4dfc-logs" (OuterVolumeSpecName: "logs") pod "ba3f0f19-5fce-4278-b609-084ca27e4dfc" (UID: "ba3f0f19-5fce-4278-b609-084ca27e4dfc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.363817 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ba3f0f19-5fce-4278-b609-084ca27e4dfc" (UID: "ba3f0f19-5fce-4278-b609-084ca27e4dfc"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.368286 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba3f0f19-5fce-4278-b609-084ca27e4dfc-kube-api-access-qbcbz" (OuterVolumeSpecName: "kube-api-access-qbcbz") pod "ba3f0f19-5fce-4278-b609-084ca27e4dfc" (UID: "ba3f0f19-5fce-4278-b609-084ca27e4dfc"). InnerVolumeSpecName "kube-api-access-qbcbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.368310 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-scripts" (OuterVolumeSpecName: "scripts") pod "ba3f0f19-5fce-4278-b609-084ca27e4dfc" (UID: "ba3f0f19-5fce-4278-b609-084ca27e4dfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.402247 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba3f0f19-5fce-4278-b609-084ca27e4dfc" (UID: "ba3f0f19-5fce-4278-b609-084ca27e4dfc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.461203 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.461240 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.461250 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbcbz\" (UniqueName: \"kubernetes.io/projected/ba3f0f19-5fce-4278-b609-084ca27e4dfc-kube-api-access-qbcbz\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.461260 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.461271 4955 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ba3f0f19-5fce-4278-b609-084ca27e4dfc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.461279 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba3f0f19-5fce-4278-b609-084ca27e4dfc-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.462976 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data" (OuterVolumeSpecName: "config-data") pod "ba3f0f19-5fce-4278-b609-084ca27e4dfc" (UID: "ba3f0f19-5fce-4278-b609-084ca27e4dfc"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.572470 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba3f0f19-5fce-4278-b609-084ca27e4dfc-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.655244 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e953827-0dd0-4148-81de-bfb2ab8942bd","Type":"ContainerStarted","Data":"4f148f34a0128f06ebc544ccfd223e6a40008e6511a072cd56adf5d514a80fd8"} Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.669794 4955 generic.go:334] "Generic (PLEG): container finished" podID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerID="7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1" exitCode=0 Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.669840 4955 generic.go:334] "Generic (PLEG): container finished" podID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerID="71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b" exitCode=143 Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.669909 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ba3f0f19-5fce-4278-b609-084ca27e4dfc","Type":"ContainerDied","Data":"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1"} Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.669937 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ba3f0f19-5fce-4278-b609-084ca27e4dfc","Type":"ContainerDied","Data":"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b"} Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.669946 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"ba3f0f19-5fce-4278-b609-084ca27e4dfc","Type":"ContainerDied","Data":"3ef7b0f12d866c784cd979d2c1dfccdb44ee9ec3e4cdd725297f1cea57d82b18"} Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.669962 4955 scope.go:117] "RemoveContainer" containerID="7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.670099 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.685419 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.749077337 podStartE2EDuration="5.685407304s" podCreationTimestamp="2025-11-28 06:39:47 +0000 UTC" firstStartedPulling="2025-11-28 06:39:48.757109419 +0000 UTC m=+1111.346364989" lastFinishedPulling="2025-11-28 06:39:49.693439376 +0000 UTC m=+1112.282694956" observedRunningTime="2025-11-28 06:39:52.678121265 +0000 UTC m=+1115.267376835" watchObservedRunningTime="2025-11-28 06:39:52.685407304 +0000 UTC m=+1115.274662874" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.688831 4955 generic.go:334] "Generic (PLEG): container finished" podID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerID="c51d62fb4c5ee6832d2e95611a72db7923d2f0d61cb80a135e6e343c7c7fa2db" exitCode=0 Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.688911 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb","Type":"ContainerDied","Data":"c51d62fb4c5ee6832d2e95611a72db7923d2f0d61cb80a135e6e343c7c7fa2db"} Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.702329 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" event={"ID":"f30cb01b-f625-4031-98a0-272f85d43a81","Type":"ContainerStarted","Data":"833402fc9333c71bf62bd77f936d8403338eaec40b93ab08936563498be078d6"} 
Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.702392 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" event={"ID":"f30cb01b-f625-4031-98a0-272f85d43a81","Type":"ContainerStarted","Data":"368a7063b71df542c33c90183cf870d5117fbb4b11d88a945f97f86449b36871"} Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.704793 4955 scope.go:117] "RemoveContainer" containerID="71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.740728 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.746241 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5dfcd47cfc-s75nx" event={"ID":"24535783-21c6-4550-965e-7fd84038058b","Type":"ContainerStarted","Data":"38f184964ddad4cd92409886b95fe55337b297bb9d57c54c728d432f9710573d"} Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.746281 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5dfcd47cfc-s75nx" event={"ID":"24535783-21c6-4550-965e-7fd84038058b","Type":"ContainerStarted","Data":"072816585c678edf53daa4b06a866f01865d86eeff3db5a9fc6fd773fa148fcb"} Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.763997 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.771909 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6985c74bc8-qgvjf" podStartSLOduration=2.419041297 podStartE2EDuration="5.771892398s" podCreationTimestamp="2025-11-28 06:39:47 +0000 UTC" firstStartedPulling="2025-11-28 06:39:48.514018103 +0000 UTC m=+1111.103273663" lastFinishedPulling="2025-11-28 06:39:51.866869194 +0000 UTC m=+1114.456124764" observedRunningTime="2025-11-28 06:39:52.746007883 +0000 UTC m=+1115.335263453" 
watchObservedRunningTime="2025-11-28 06:39:52.771892398 +0000 UTC m=+1115.361147968" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.784375 4955 scope.go:117] "RemoveContainer" containerID="7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1" Nov 28 06:39:52 crc kubenswrapper[4955]: E1128 06:39:52.786033 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1\": container with ID starting with 7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1 not found: ID does not exist" containerID="7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.786091 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1"} err="failed to get container status \"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1\": rpc error: code = NotFound desc = could not find container \"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1\": container with ID starting with 7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1 not found: ID does not exist" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.786121 4955 scope.go:117] "RemoveContainer" containerID="71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b" Nov 28 06:39:52 crc kubenswrapper[4955]: E1128 06:39:52.789662 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b\": container with ID starting with 71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b not found: ID does not exist" containerID="71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b" Nov 28 06:39:52 crc 
kubenswrapper[4955]: I1128 06:39:52.789833 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b"} err="failed to get container status \"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b\": rpc error: code = NotFound desc = could not find container \"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b\": container with ID starting with 71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b not found: ID does not exist" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.789938 4955 scope.go:117] "RemoveContainer" containerID="7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.794459 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:52 crc kubenswrapper[4955]: E1128 06:39:52.794928 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerName="cinder-api-log" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.794950 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerName="cinder-api-log" Nov 28 06:39:52 crc kubenswrapper[4955]: E1128 06:39:52.794960 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232d06a5-6f80-483b-8735-8c51b46df85d" containerName="init" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.794965 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="232d06a5-6f80-483b-8735-8c51b46df85d" containerName="init" Nov 28 06:39:52 crc kubenswrapper[4955]: E1128 06:39:52.794984 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerName="cinder-api" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.794990 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerName="cinder-api" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.795191 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerName="cinder-api-log" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.795213 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" containerName="cinder-api" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.795234 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="232d06a5-6f80-483b-8735-8c51b46df85d" containerName="init" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.796431 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.802652 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.802933 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.803142 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.803348 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1"} err="failed to get container status \"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1\": rpc error: code = NotFound desc = could not find container \"7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1\": container with ID starting with 7593eee067c308ec18d4a1d43e1a62a836fc11f00110f4089cb512adfb3743e1 not found: ID does not exist" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.803436 4955 
scope.go:117] "RemoveContainer" containerID="71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.821036 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b"} err="failed to get container status \"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b\": rpc error: code = NotFound desc = could not find container \"71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b\": container with ID starting with 71a06fea839b1d20a6a740b3132c33e7e8cbf658798a8bce937415ca2c9c790b not found: ID does not exist" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.823974 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.837172 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5dfcd47cfc-s75nx" podStartSLOduration=2.222453436 podStartE2EDuration="5.837154114s" podCreationTimestamp="2025-11-28 06:39:47 +0000 UTC" firstStartedPulling="2025-11-28 06:39:48.277970948 +0000 UTC m=+1110.867226518" lastFinishedPulling="2025-11-28 06:39:51.892671626 +0000 UTC m=+1114.481927196" observedRunningTime="2025-11-28 06:39:52.77565293 +0000 UTC m=+1115.364908500" watchObservedRunningTime="2025-11-28 06:39:52.837154114 +0000 UTC m=+1115.426409674" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883402 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883564 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883629 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-config-data\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883676 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-scripts\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883691 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-logs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883748 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883778 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883833 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.883883 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fq5z\" (UniqueName: \"kubernetes.io/projected/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-kube-api-access-5fq5z\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.966789 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.985738 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-scripts\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.985781 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-logs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.985837 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.985880 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.985915 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.985946 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fq5z\" (UniqueName: \"kubernetes.io/projected/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-kube-api-access-5fq5z\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.985965 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.986029 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 
28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.986059 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-config-data\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.987449 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-logs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:52 crc kubenswrapper[4955]: I1128 06:39:52.989568 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.002429 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.002581 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.002771 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-scripts\") pod \"cinder-api-0\" (UID: 
\"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.005030 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-config-data\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.006010 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fq5z\" (UniqueName: \"kubernetes.io/projected/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-kube-api-access-5fq5z\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.007089 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.007905 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fdcb880-5f80-4347-81ef-f9f5ff9a097b-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fdcb880-5f80-4347-81ef-f9f5ff9a097b\") " pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.088101 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.167092 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.189343 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-sg-core-conf-yaml\") pod \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.189396 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-config-data\") pod \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.189445 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-scripts\") pod \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.189496 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd72c\" (UniqueName: \"kubernetes.io/projected/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-kube-api-access-gd72c\") pod \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.189611 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-log-httpd\") pod \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.189693 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-combined-ca-bundle\") pod \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.189711 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-run-httpd\") pod \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\" (UID: \"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb\") " Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.190384 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" (UID: "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.194791 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" (UID: "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.197747 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-kube-api-access-gd72c" (OuterVolumeSpecName: "kube-api-access-gd72c") pod "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" (UID: "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb"). InnerVolumeSpecName "kube-api-access-gd72c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.209105 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-scripts" (OuterVolumeSpecName: "scripts") pod "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" (UID: "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.239896 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" (UID: "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.293626 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.293656 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.293667 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.293675 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd72c\" (UniqueName: \"kubernetes.io/projected/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-kube-api-access-gd72c\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:53 crc 
kubenswrapper[4955]: I1128 06:39:53.293686 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.296735 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" (UID: "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.313444 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-config-data" (OuterVolumeSpecName: "config-data") pod "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" (UID: "0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.393271 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.393747 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.393800 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.394720 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a33364ffc1fcadc84c98fe0fe29a3e3b087189f2758e47ffce1858ea966d6d9"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.394773 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://5a33364ffc1fcadc84c98fe0fe29a3e3b087189f2758e47ffce1858ea966d6d9" gracePeriod=600 Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.395244 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.395273 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.510199 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.729091 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba3f0f19-5fce-4278-b609-084ca27e4dfc" path="/var/lib/kubelet/pods/ba3f0f19-5fce-4278-b609-084ca27e4dfc/volumes" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.729997 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 06:39:53 crc kubenswrapper[4955]: W1128 06:39:53.741687 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fdcb880_5f80_4347_81ef_f9f5ff9a097b.slice/crio-a8017046000428042958defc0ec22792c06bf115690ffc7a5e1a02f28970ad9b WatchSource:0}: Error finding container a8017046000428042958defc0ec22792c06bf115690ffc7a5e1a02f28970ad9b: Status 404 returned error can't find the container with id a8017046000428042958defc0ec22792c06bf115690ffc7a5e1a02f28970ad9b Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.765657 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.765681 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb","Type":"ContainerDied","Data":"f2fae82bd903a5c6b77aff568b3b1f0f10bc6ee86e783b5c1d2e9d03e752392e"} Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.765723 4955 scope.go:117] "RemoveContainer" containerID="c060893e7a4c8124aefc06bfdea3d87482b132e20cea2c71b6f85558898f1b42" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.778844 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fdcb880-5f80-4347-81ef-f9f5ff9a097b","Type":"ContainerStarted","Data":"a8017046000428042958defc0ec22792c06bf115690ffc7a5e1a02f28970ad9b"} Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.907743 4955 scope.go:117] "RemoveContainer" containerID="7ab072dde9fa67d649689612cb43ec013273f4118a344bf58f8f008a2bd2586e" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.940656 4955 scope.go:117] "RemoveContainer" containerID="c51d62fb4c5ee6832d2e95611a72db7923d2f0d61cb80a135e6e343c7c7fa2db" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.943878 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.955565 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.961565 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:39:53 crc kubenswrapper[4955]: E1128 06:39:53.961891 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="sg-core" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.961906 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" 
containerName="sg-core" Nov 28 06:39:53 crc kubenswrapper[4955]: E1128 06:39:53.961926 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="proxy-httpd" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.961933 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="proxy-httpd" Nov 28 06:39:53 crc kubenswrapper[4955]: E1128 06:39:53.961945 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="ceilometer-notification-agent" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.961952 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="ceilometer-notification-agent" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.962112 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="ceilometer-notification-agent" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.962145 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="proxy-httpd" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.962157 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" containerName="sg-core" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.964016 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.967945 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.968109 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 06:39:53 crc kubenswrapper[4955]: I1128 06:39:53.981170 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.111598 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-config-data\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.111678 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpd4r\" (UniqueName: \"kubernetes.io/projected/a4e26621-fb49-4397-80c0-e4be8cbc7c41-kube-api-access-gpd4r\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.111710 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.111737 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-scripts\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " 
pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.111806 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.112175 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-log-httpd\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.112243 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-run-httpd\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.216143 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-config-data\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.216179 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpd4r\" (UniqueName: \"kubernetes.io/projected/a4e26621-fb49-4397-80c0-e4be8cbc7c41-kube-api-access-gpd4r\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.216210 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.216233 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-scripts\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.216250 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.216347 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-log-httpd\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.216368 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-run-httpd\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.216786 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-run-httpd\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: 
I1128 06:39:54.222147 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.223662 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-log-httpd\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.229200 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-scripts\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.231078 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-config-data\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.248440 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpd4r\" (UniqueName: \"kubernetes.io/projected/a4e26621-fb49-4397-80c0-e4be8cbc7c41-kube-api-access-gpd4r\") pod \"ceilometer-0\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.249345 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.298192 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.662560 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b4956ccd4-8qnhx"] Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.664198 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.666837 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.670057 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.673411 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b4956ccd4-8qnhx"] Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.788981 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="5a33364ffc1fcadc84c98fe0fe29a3e3b087189f2758e47ffce1858ea966d6d9" exitCode=0 Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.789052 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"5a33364ffc1fcadc84c98fe0fe29a3e3b087189f2758e47ffce1858ea966d6d9"} Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.789074 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" 
event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"4aed55a0a733fd5fb3966b873f86b550981c4a573b9c9f3bb84203a8e1648584"} Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.789089 4955 scope.go:117] "RemoveContainer" containerID="4f33502a89d814132c8a3643f347e9c608f66ebfe86f1fe67c34b4729fe71bd9" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.801311 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fdcb880-5f80-4347-81ef-f9f5ff9a097b","Type":"ContainerStarted","Data":"05d58a8714e0e3ce7844877823933337a8a8c9b1104af5f84f5396629b7fda8c"} Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.849443 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-public-tls-certs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.849852 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb37217e-f20a-4e50-b616-b0b1231fbd89-logs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.849891 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-config-data-custom\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.849994 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-config-data\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.850183 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-combined-ca-bundle\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.850289 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-internal-tls-certs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.850720 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szqs\" (UniqueName: \"kubernetes.io/projected/eb37217e-f20a-4e50-b616-b0b1231fbd89-kube-api-access-2szqs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.915514 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:39:54 crc kubenswrapper[4955]: W1128 06:39:54.921806 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4e26621_fb49_4397_80c0_e4be8cbc7c41.slice/crio-8f4a76fee989012224876e50ac30964cd855703013e000069e435da60d8ef4ca WatchSource:0}: Error finding 
container 8f4a76fee989012224876e50ac30964cd855703013e000069e435da60d8ef4ca: Status 404 returned error can't find the container with id 8f4a76fee989012224876e50ac30964cd855703013e000069e435da60d8ef4ca Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.955074 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szqs\" (UniqueName: \"kubernetes.io/projected/eb37217e-f20a-4e50-b616-b0b1231fbd89-kube-api-access-2szqs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.955127 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-public-tls-certs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.955196 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb37217e-f20a-4e50-b616-b0b1231fbd89-logs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.955225 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-config-data-custom\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.955285 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-config-data\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.955341 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-combined-ca-bundle\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.955391 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-internal-tls-certs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.957687 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb37217e-f20a-4e50-b616-b0b1231fbd89-logs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.965965 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-config-data-custom\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.968073 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-public-tls-certs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.968318 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-internal-tls-certs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.970987 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-config-data\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.971630 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb37217e-f20a-4e50-b616-b0b1231fbd89-combined-ca-bundle\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:54 crc kubenswrapper[4955]: I1128 06:39:54.994406 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2szqs\" (UniqueName: \"kubernetes.io/projected/eb37217e-f20a-4e50-b616-b0b1231fbd89-kube-api-access-2szqs\") pod \"barbican-api-7b4956ccd4-8qnhx\" (UID: \"eb37217e-f20a-4e50-b616-b0b1231fbd89\") " pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:55 crc kubenswrapper[4955]: I1128 06:39:55.007117 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:55 crc kubenswrapper[4955]: I1128 06:39:55.493480 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b4956ccd4-8qnhx"] Nov 28 06:39:55 crc kubenswrapper[4955]: W1128 06:39:55.496073 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb37217e_f20a_4e50_b616_b0b1231fbd89.slice/crio-67d238c567148a70c8108953daee54a2a1cf895c450f7095c821c8de46bd0334 WatchSource:0}: Error finding container 67d238c567148a70c8108953daee54a2a1cf895c450f7095c821c8de46bd0334: Status 404 returned error can't find the container with id 67d238c567148a70c8108953daee54a2a1cf895c450f7095c821c8de46bd0334 Nov 28 06:39:55 crc kubenswrapper[4955]: I1128 06:39:55.720633 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb" path="/var/lib/kubelet/pods/0ae81b0d-293c-4ec2-8d42-8fb1fd0c1afb/volumes" Nov 28 06:39:55 crc kubenswrapper[4955]: I1128 06:39:55.840660 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fdcb880-5f80-4347-81ef-f9f5ff9a097b","Type":"ContainerStarted","Data":"633aeb9d391178c623539205e4c41374e6196a1c765ca016bf6fce96f873a565"} Nov 28 06:39:55 crc kubenswrapper[4955]: I1128 06:39:55.843115 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 06:39:55 crc kubenswrapper[4955]: I1128 06:39:55.862799 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b4956ccd4-8qnhx" event={"ID":"eb37217e-f20a-4e50-b616-b0b1231fbd89","Type":"ContainerStarted","Data":"99f8a99718e51e8af27042222177c9e42e77fb791abe981f37e180872091bc52"} Nov 28 06:39:55 crc kubenswrapper[4955]: I1128 06:39:55.862852 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b4956ccd4-8qnhx" 
event={"ID":"eb37217e-f20a-4e50-b616-b0b1231fbd89","Type":"ContainerStarted","Data":"67d238c567148a70c8108953daee54a2a1cf895c450f7095c821c8de46bd0334"} Nov 28 06:39:55 crc kubenswrapper[4955]: I1128 06:39:55.896719 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerStarted","Data":"8f4a76fee989012224876e50ac30964cd855703013e000069e435da60d8ef4ca"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.398558 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.435326 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.435307572 podStartE2EDuration="4.435307572s" podCreationTimestamp="2025-11-28 06:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:55.877003045 +0000 UTC m=+1118.466258635" watchObservedRunningTime="2025-11-28 06:39:56.435307572 +0000 UTC m=+1119.024563142" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.446130 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6478fb8469-kzjkp" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.496323 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cqw4\" (UniqueName: \"kubernetes.io/projected/1399b8d3-cee5-44f3-9747-701eb22526a8-kube-api-access-5cqw4\") pod \"1399b8d3-cee5-44f3-9747-701eb22526a8\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.501400 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-scripts\") pod 
\"1399b8d3-cee5-44f3-9747-701eb22526a8\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.501467 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-config-data\") pod \"1399b8d3-cee5-44f3-9747-701eb22526a8\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.501498 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1399b8d3-cee5-44f3-9747-701eb22526a8-logs\") pod \"1399b8d3-cee5-44f3-9747-701eb22526a8\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.501537 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1399b8d3-cee5-44f3-9747-701eb22526a8-horizon-secret-key\") pod \"1399b8d3-cee5-44f3-9747-701eb22526a8\" (UID: \"1399b8d3-cee5-44f3-9747-701eb22526a8\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.503453 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1399b8d3-cee5-44f3-9747-701eb22526a8-kube-api-access-5cqw4" (OuterVolumeSpecName: "kube-api-access-5cqw4") pod "1399b8d3-cee5-44f3-9747-701eb22526a8" (UID: "1399b8d3-cee5-44f3-9747-701eb22526a8"). InnerVolumeSpecName "kube-api-access-5cqw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.504793 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1399b8d3-cee5-44f3-9747-701eb22526a8-logs" (OuterVolumeSpecName: "logs") pod "1399b8d3-cee5-44f3-9747-701eb22526a8" (UID: "1399b8d3-cee5-44f3-9747-701eb22526a8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.511350 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76f57c54dd-27284"] Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.511586 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76f57c54dd-27284" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerName="neutron-api" containerID="cri-o://0580084a5e1e9d160950ef212cc3a01146f7db7944d1a655618cc2cea72da792" gracePeriod=30 Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.511893 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-76f57c54dd-27284" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerName="neutron-httpd" containerID="cri-o://76b9c94ae0d7f0f9995f107a2b75635258e264d11828adba5c4997f976d10f3b" gracePeriod=30 Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.518001 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1399b8d3-cee5-44f3-9747-701eb22526a8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1399b8d3-cee5-44f3-9747-701eb22526a8" (UID: "1399b8d3-cee5-44f3-9747-701eb22526a8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.520919 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.555051 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-scripts" (OuterVolumeSpecName: "scripts") pod "1399b8d3-cee5-44f3-9747-701eb22526a8" (UID: "1399b8d3-cee5-44f3-9747-701eb22526a8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.562675 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-config-data" (OuterVolumeSpecName: "config-data") pod "1399b8d3-cee5-44f3-9747-701eb22526a8" (UID: "1399b8d3-cee5-44f3-9747-701eb22526a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.565779 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.618209 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cqw4\" (UniqueName: \"kubernetes.io/projected/1399b8d3-cee5-44f3-9747-701eb22526a8-kube-api-access-5cqw4\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.619560 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.619620 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1399b8d3-cee5-44f3-9747-701eb22526a8-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.619634 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1399b8d3-cee5-44f3-9747-701eb22526a8-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.619649 4955 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1399b8d3-cee5-44f3-9747-701eb22526a8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 
06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.723791 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-scripts\") pod \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.723882 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-config-data\") pod \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.723904 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvfff\" (UniqueName: \"kubernetes.io/projected/ffad8eb2-ac71-461f-a0fc-0203951d3e05-kube-api-access-pvfff\") pod \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.723923 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffad8eb2-ac71-461f-a0fc-0203951d3e05-logs\") pod \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.723959 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffad8eb2-ac71-461f-a0fc-0203951d3e05-horizon-secret-key\") pod \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\" (UID: \"ffad8eb2-ac71-461f-a0fc-0203951d3e05\") " Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.726880 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffad8eb2-ac71-461f-a0fc-0203951d3e05-logs" (OuterVolumeSpecName: "logs") pod 
"ffad8eb2-ac71-461f-a0fc-0203951d3e05" (UID: "ffad8eb2-ac71-461f-a0fc-0203951d3e05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.730612 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffad8eb2-ac71-461f-a0fc-0203951d3e05-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ffad8eb2-ac71-461f-a0fc-0203951d3e05" (UID: "ffad8eb2-ac71-461f-a0fc-0203951d3e05"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.736756 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffad8eb2-ac71-461f-a0fc-0203951d3e05-kube-api-access-pvfff" (OuterVolumeSpecName: "kube-api-access-pvfff") pod "ffad8eb2-ac71-461f-a0fc-0203951d3e05" (UID: "ffad8eb2-ac71-461f-a0fc-0203951d3e05"). InnerVolumeSpecName "kube-api-access-pvfff". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.755164 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-config-data" (OuterVolumeSpecName: "config-data") pod "ffad8eb2-ac71-461f-a0fc-0203951d3e05" (UID: "ffad8eb2-ac71-461f-a0fc-0203951d3e05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.757765 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-scripts" (OuterVolumeSpecName: "scripts") pod "ffad8eb2-ac71-461f-a0fc-0203951d3e05" (UID: "ffad8eb2-ac71-461f-a0fc-0203951d3e05"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.826772 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.826813 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvfff\" (UniqueName: \"kubernetes.io/projected/ffad8eb2-ac71-461f-a0fc-0203951d3e05-kube-api-access-pvfff\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.826825 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffad8eb2-ac71-461f-a0fc-0203951d3e05-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.826833 4955 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ffad8eb2-ac71-461f-a0fc-0203951d3e05-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.826841 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffad8eb2-ac71-461f-a0fc-0203951d3e05-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.909110 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerStarted","Data":"21b63b57ac9ddee1568311525bddd9e857e644749c800af97ab513efbeee524a"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.909153 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerStarted","Data":"7663f57d4616fae3d2c0db5f9458e0b890b81d158ab39247efa411e6cf3e2be2"} Nov 28 06:39:56 crc 
kubenswrapper[4955]: I1128 06:39:56.911052 4955 generic.go:334] "Generic (PLEG): container finished" podID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerID="768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34" exitCode=137 Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.911082 4955 generic.go:334] "Generic (PLEG): container finished" podID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerID="052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270" exitCode=137 Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.911130 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c4dc88849-jtrxl" event={"ID":"1399b8d3-cee5-44f3-9747-701eb22526a8","Type":"ContainerDied","Data":"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.911156 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c4dc88849-jtrxl" event={"ID":"1399b8d3-cee5-44f3-9747-701eb22526a8","Type":"ContainerDied","Data":"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.911157 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c4dc88849-jtrxl" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.911174 4955 scope.go:117] "RemoveContainer" containerID="768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.911165 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c4dc88849-jtrxl" event={"ID":"1399b8d3-cee5-44f3-9747-701eb22526a8","Type":"ContainerDied","Data":"5f91a7f8aa0e07523531168cbb37476290c856d1675475b15d999e071b755288"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.916223 4955 generic.go:334] "Generic (PLEG): container finished" podID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerID="76b9c94ae0d7f0f9995f107a2b75635258e264d11828adba5c4997f976d10f3b" exitCode=0 Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.916302 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76f57c54dd-27284" event={"ID":"56f0f65a-7f13-4483-9806-7fa8d2738a27","Type":"ContainerDied","Data":"76b9c94ae0d7f0f9995f107a2b75635258e264d11828adba5c4997f976d10f3b"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.918453 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b4956ccd4-8qnhx" event={"ID":"eb37217e-f20a-4e50-b616-b0b1231fbd89","Type":"ContainerStarted","Data":"5bb5017c86ef13d68a676234d94178bfe8c08d15c43aefab61e352d99837ed45"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.920016 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.920070 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.937740 4955 generic.go:334] "Generic (PLEG): container finished" podID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" 
containerID="19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5" exitCode=137 Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.937769 4955 generic.go:334] "Generic (PLEG): container finished" podID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerID="9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420" exitCode=137 Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.937874 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcf66475f-c4s6x" event={"ID":"ffad8eb2-ac71-461f-a0fc-0203951d3e05","Type":"ContainerDied","Data":"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.937898 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcf66475f-c4s6x" event={"ID":"ffad8eb2-ac71-461f-a0fc-0203951d3e05","Type":"ContainerDied","Data":"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.937910 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bcf66475f-c4s6x" event={"ID":"ffad8eb2-ac71-461f-a0fc-0203951d3e05","Type":"ContainerDied","Data":"9d71b51d312b5f7882438be7ed9a0ca2c9f96153fca7a7766a5a65d9d4d1a503"} Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.938030 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bcf66475f-c4s6x" Nov 28 06:39:56 crc kubenswrapper[4955]: I1128 06:39:56.959857 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b4956ccd4-8qnhx" podStartSLOduration=2.959836569 podStartE2EDuration="2.959836569s" podCreationTimestamp="2025-11-28 06:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:39:56.95030422 +0000 UTC m=+1119.539559810" watchObservedRunningTime="2025-11-28 06:39:56.959836569 +0000 UTC m=+1119.549092139" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.010543 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c4dc88849-jtrxl"] Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.028566 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c4dc88849-jtrxl"] Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.041495 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bcf66475f-c4s6x"] Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.045388 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bcf66475f-c4s6x"] Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.080063 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.113726 4955 scope.go:117] "RemoveContainer" containerID="052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.140131 4955 scope.go:117] "RemoveContainer" containerID="768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34" Nov 28 06:39:57 crc kubenswrapper[4955]: E1128 06:39:57.140686 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34\": container with ID starting with 768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34 not found: ID does not exist" containerID="768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.140737 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34"} err="failed to get container status \"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34\": rpc error: code = NotFound desc = could not find container \"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34\": container with ID starting with 768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34 not found: ID does not exist" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.140764 4955 scope.go:117] "RemoveContainer" containerID="052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270" Nov 28 06:39:57 crc kubenswrapper[4955]: E1128 06:39:57.141191 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270\": container with ID starting with 052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270 not found: ID does not exist" containerID="052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.141221 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270"} err="failed to get container status \"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270\": rpc error: code = NotFound desc = could not find container \"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270\": container with ID 
starting with 052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270 not found: ID does not exist" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.141237 4955 scope.go:117] "RemoveContainer" containerID="768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.141547 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34"} err="failed to get container status \"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34\": rpc error: code = NotFound desc = could not find container \"768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34\": container with ID starting with 768a2a4e4159e60fafa87cc085d71be075f5da65bc58334c8597ab54968bbb34 not found: ID does not exist" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.141600 4955 scope.go:117] "RemoveContainer" containerID="052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.141936 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270"} err="failed to get container status \"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270\": rpc error: code = NotFound desc = could not find container \"052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270\": container with ID starting with 052ccfafe78fdcde988fe8743088d59865bfcc047f31467e5b8a37b78ac85270 not found: ID does not exist" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.141954 4955 scope.go:117] "RemoveContainer" containerID="19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.323724 4955 scope.go:117] "RemoveContainer" 
containerID="9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.439011 4955 scope.go:117] "RemoveContainer" containerID="19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5" Nov 28 06:39:57 crc kubenswrapper[4955]: E1128 06:39:57.439439 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5\": container with ID starting with 19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5 not found: ID does not exist" containerID="19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.439471 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5"} err="failed to get container status \"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5\": rpc error: code = NotFound desc = could not find container \"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5\": container with ID starting with 19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5 not found: ID does not exist" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.439499 4955 scope.go:117] "RemoveContainer" containerID="9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420" Nov 28 06:39:57 crc kubenswrapper[4955]: E1128 06:39:57.439704 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420\": container with ID starting with 9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420 not found: ID does not exist" containerID="9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420" Nov 28 06:39:57 crc 
kubenswrapper[4955]: I1128 06:39:57.439719 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420"} err="failed to get container status \"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420\": rpc error: code = NotFound desc = could not find container \"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420\": container with ID starting with 9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420 not found: ID does not exist" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.439731 4955 scope.go:117] "RemoveContainer" containerID="19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.439944 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5"} err="failed to get container status \"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5\": rpc error: code = NotFound desc = could not find container \"19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5\": container with ID starting with 19e33ca8158a100c9453e049c2b093cbbd96192e878cc0edd04c380dd22afdf5 not found: ID does not exist" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.439961 4955 scope.go:117] "RemoveContainer" containerID="9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.440248 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420"} err="failed to get container status \"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420\": rpc error: code = NotFound desc = could not find container \"9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420\": container 
with ID starting with 9b07d3e74dfb41bc1ee8e97b15b1efa34c52ead9514bde90bed8ede5e5dc6420 not found: ID does not exist" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.714011 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" path="/var/lib/kubelet/pods/1399b8d3-cee5-44f3-9747-701eb22526a8/volumes" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.714700 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" path="/var/lib/kubelet/pods/ffad8eb2-ac71-461f-a0fc-0203951d3e05/volumes" Nov 28 06:39:57 crc kubenswrapper[4955]: I1128 06:39:57.951432 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerStarted","Data":"9989fc09f844dc95050233e5333c9f54ec3ab1c2e0968b70db20aa5611b07e56"} Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.216709 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.235653 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.359562 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8mlfs"] Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.359828 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" podUID="00785eeb-47d1-4a5e-9c69-489af5075748" containerName="dnsmasq-dns" containerID="cri-o://4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504" gracePeriod=10 Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.370013 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 
06:39:58.495963 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" podUID="00785eeb-47d1-4a5e-9c69-489af5075748" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.652592 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.885493 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.989024 4955 generic.go:334] "Generic (PLEG): container finished" podID="00785eeb-47d1-4a5e-9c69-489af5075748" containerID="4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504" exitCode=0 Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.989377 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" event={"ID":"00785eeb-47d1-4a5e-9c69-489af5075748","Type":"ContainerDied","Data":"4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504"} Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.989414 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" event={"ID":"00785eeb-47d1-4a5e-9c69-489af5075748","Type":"ContainerDied","Data":"318962e847110c6253a605d800f59e7f691928c203102735863cf0ebfc646c33"} Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.989433 4955 scope.go:117] "RemoveContainer" containerID="4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504" Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.989350 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84b966f6c9-8mlfs" Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.989723 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerName="cinder-scheduler" containerID="cri-o://5e657792d797d8dd80d3389a6ce634a026a69a2f7dad6b338591a9238374bf5b" gracePeriod=30 Nov 28 06:39:58 crc kubenswrapper[4955]: I1128 06:39:58.990138 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerName="probe" containerID="cri-o://4f148f34a0128f06ebc544ccfd223e6a40008e6511a072cd56adf5d514a80fd8" gracePeriod=30 Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.042795 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-56f45c5b6-nqg9b" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.076929 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-swift-storage-0\") pod \"00785eeb-47d1-4a5e-9c69-489af5075748\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.077426 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-config\") pod \"00785eeb-47d1-4a5e-9c69-489af5075748\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.077567 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-svc\") pod \"00785eeb-47d1-4a5e-9c69-489af5075748\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " Nov 28 
06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.077688 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-nb\") pod \"00785eeb-47d1-4a5e-9c69-489af5075748\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.077845 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cjmx\" (UniqueName: \"kubernetes.io/projected/00785eeb-47d1-4a5e-9c69-489af5075748-kube-api-access-8cjmx\") pod \"00785eeb-47d1-4a5e-9c69-489af5075748\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.077986 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-sb\") pod \"00785eeb-47d1-4a5e-9c69-489af5075748\" (UID: \"00785eeb-47d1-4a5e-9c69-489af5075748\") " Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.115588 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00785eeb-47d1-4a5e-9c69-489af5075748-kube-api-access-8cjmx" (OuterVolumeSpecName: "kube-api-access-8cjmx") pod "00785eeb-47d1-4a5e-9c69-489af5075748" (UID: "00785eeb-47d1-4a5e-9c69-489af5075748"). InnerVolumeSpecName "kube-api-access-8cjmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.150475 4955 scope.go:117] "RemoveContainer" containerID="9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.154568 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00785eeb-47d1-4a5e-9c69-489af5075748" (UID: "00785eeb-47d1-4a5e-9c69-489af5075748"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.175959 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9c465b4d8-cslvv"] Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.176426 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9c465b4d8-cslvv" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon-log" containerID="cri-o://04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26" gracePeriod=30 Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.177073 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-9c465b4d8-cslvv" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon" containerID="cri-o://0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2" gracePeriod=30 Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.181343 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.181366 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cjmx\" (UniqueName: 
\"kubernetes.io/projected/00785eeb-47d1-4a5e-9c69-489af5075748-kube-api-access-8cjmx\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.193135 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-config" (OuterVolumeSpecName: "config") pod "00785eeb-47d1-4a5e-9c69-489af5075748" (UID: "00785eeb-47d1-4a5e-9c69-489af5075748"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.199663 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "00785eeb-47d1-4a5e-9c69-489af5075748" (UID: "00785eeb-47d1-4a5e-9c69-489af5075748"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.201389 4955 scope.go:117] "RemoveContainer" containerID="4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504" Nov 28 06:39:59 crc kubenswrapper[4955]: E1128 06:39:59.204814 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504\": container with ID starting with 4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504 not found: ID does not exist" containerID="4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.204965 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504"} err="failed to get container status \"4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504\": rpc error: code = 
NotFound desc = could not find container \"4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504\": container with ID starting with 4270fbac9dab469c988bb6e4fb03aaa9c055c3acf9088c71add8b42ef00af504 not found: ID does not exist" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.205091 4955 scope.go:117] "RemoveContainer" containerID="9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457" Nov 28 06:39:59 crc kubenswrapper[4955]: E1128 06:39:59.205565 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457\": container with ID starting with 9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457 not found: ID does not exist" containerID="9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.205619 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457"} err="failed to get container status \"9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457\": rpc error: code = NotFound desc = could not find container \"9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457\": container with ID starting with 9e6fb050363ade627628a62db63451fc029bc56de92b97045f5090824e8a6457 not found: ID does not exist" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.225474 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00785eeb-47d1-4a5e-9c69-489af5075748" (UID: "00785eeb-47d1-4a5e-9c69-489af5075748"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.246041 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00785eeb-47d1-4a5e-9c69-489af5075748" (UID: "00785eeb-47d1-4a5e-9c69-489af5075748"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.283072 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.283107 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.283116 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.283124 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00785eeb-47d1-4a5e-9c69-489af5075748-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.322809 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8mlfs"] Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.327766 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84b966f6c9-8mlfs"] Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.371005 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.462147 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:39:59 crc kubenswrapper[4955]: I1128 06:39:59.716088 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00785eeb-47d1-4a5e-9c69-489af5075748" path="/var/lib/kubelet/pods/00785eeb-47d1-4a5e-9c69-489af5075748/volumes" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.001922 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerStarted","Data":"f89611ce3bda246149ac6c272b666e22c658dd19303f4af902bb286f617b7092"} Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.002204 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.009523 4955 generic.go:334] "Generic (PLEG): container finished" podID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerID="4f148f34a0128f06ebc544ccfd223e6a40008e6511a072cd56adf5d514a80fd8" exitCode=0 Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.009582 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e953827-0dd0-4148-81de-bfb2ab8942bd","Type":"ContainerDied","Data":"4f148f34a0128f06ebc544ccfd223e6a40008e6511a072cd56adf5d514a80fd8"} Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.015901 4955 generic.go:334] "Generic (PLEG): container finished" podID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerID="0580084a5e1e9d160950ef212cc3a01146f7db7944d1a655618cc2cea72da792" exitCode=0 Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.016086 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76f57c54dd-27284" 
event={"ID":"56f0f65a-7f13-4483-9806-7fa8d2738a27","Type":"ContainerDied","Data":"0580084a5e1e9d160950ef212cc3a01146f7db7944d1a655618cc2cea72da792"} Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.039452 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.017293554 podStartE2EDuration="7.039437452s" podCreationTimestamp="2025-11-28 06:39:53 +0000 UTC" firstStartedPulling="2025-11-28 06:39:54.924697725 +0000 UTC m=+1117.513953295" lastFinishedPulling="2025-11-28 06:39:58.946841623 +0000 UTC m=+1121.536097193" observedRunningTime="2025-11-28 06:40:00.033432349 +0000 UTC m=+1122.622687919" watchObservedRunningTime="2025-11-28 06:40:00.039437452 +0000 UTC m=+1122.628693022" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.145958 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.309468 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-config\") pod \"56f0f65a-7f13-4483-9806-7fa8d2738a27\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.309658 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-httpd-config\") pod \"56f0f65a-7f13-4483-9806-7fa8d2738a27\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.309737 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-ovndb-tls-certs\") pod \"56f0f65a-7f13-4483-9806-7fa8d2738a27\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " Nov 28 06:40:00 crc 
kubenswrapper[4955]: I1128 06:40:00.309845 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-combined-ca-bundle\") pod \"56f0f65a-7f13-4483-9806-7fa8d2738a27\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.309958 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9fvq\" (UniqueName: \"kubernetes.io/projected/56f0f65a-7f13-4483-9806-7fa8d2738a27-kube-api-access-h9fvq\") pod \"56f0f65a-7f13-4483-9806-7fa8d2738a27\" (UID: \"56f0f65a-7f13-4483-9806-7fa8d2738a27\") " Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.317377 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "56f0f65a-7f13-4483-9806-7fa8d2738a27" (UID: "56f0f65a-7f13-4483-9806-7fa8d2738a27"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.317411 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f0f65a-7f13-4483-9806-7fa8d2738a27-kube-api-access-h9fvq" (OuterVolumeSpecName: "kube-api-access-h9fvq") pod "56f0f65a-7f13-4483-9806-7fa8d2738a27" (UID: "56f0f65a-7f13-4483-9806-7fa8d2738a27"). InnerVolumeSpecName "kube-api-access-h9fvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.361343 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-config" (OuterVolumeSpecName: "config") pod "56f0f65a-7f13-4483-9806-7fa8d2738a27" (UID: "56f0f65a-7f13-4483-9806-7fa8d2738a27"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.380448 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56f0f65a-7f13-4483-9806-7fa8d2738a27" (UID: "56f0f65a-7f13-4483-9806-7fa8d2738a27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.413181 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.413214 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.413226 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.413309 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9fvq\" (UniqueName: \"kubernetes.io/projected/56f0f65a-7f13-4483-9806-7fa8d2738a27-kube-api-access-h9fvq\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.414737 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "56f0f65a-7f13-4483-9806-7fa8d2738a27" (UID: "56f0f65a-7f13-4483-9806-7fa8d2738a27"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:00 crc kubenswrapper[4955]: I1128 06:40:00.514739 4955 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f0f65a-7f13-4483-9806-7fa8d2738a27-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:01 crc kubenswrapper[4955]: I1128 06:40:01.043874 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-76f57c54dd-27284" Nov 28 06:40:01 crc kubenswrapper[4955]: I1128 06:40:01.048744 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-76f57c54dd-27284" event={"ID":"56f0f65a-7f13-4483-9806-7fa8d2738a27","Type":"ContainerDied","Data":"4995f313a244b8354071b34ba2655aae9621a7de5dcef26d180b250334246633"} Nov 28 06:40:01 crc kubenswrapper[4955]: I1128 06:40:01.049048 4955 scope.go:117] "RemoveContainer" containerID="76b9c94ae0d7f0f9995f107a2b75635258e264d11828adba5c4997f976d10f3b" Nov 28 06:40:01 crc kubenswrapper[4955]: I1128 06:40:01.085158 4955 scope.go:117] "RemoveContainer" containerID="0580084a5e1e9d160950ef212cc3a01146f7db7944d1a655618cc2cea72da792" Nov 28 06:40:01 crc kubenswrapper[4955]: I1128 06:40:01.102420 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-76f57c54dd-27284"] Nov 28 06:40:01 crc kubenswrapper[4955]: I1128 06:40:01.112960 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-76f57c54dd-27284"] Nov 28 06:40:01 crc kubenswrapper[4955]: I1128 06:40:01.505727 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:40:01 crc kubenswrapper[4955]: I1128 06:40:01.721827 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" path="/var/lib/kubelet/pods/56f0f65a-7f13-4483-9806-7fa8d2738a27/volumes" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.049969 4955 
generic.go:334] "Generic (PLEG): container finished" podID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerID="5e657792d797d8dd80d3389a6ce634a026a69a2f7dad6b338591a9238374bf5b" exitCode=0 Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.050235 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e953827-0dd0-4148-81de-bfb2ab8942bd","Type":"ContainerDied","Data":"5e657792d797d8dd80d3389a6ce634a026a69a2f7dad6b338591a9238374bf5b"} Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.162762 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.358106 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data\") pod \"4e953827-0dd0-4148-81de-bfb2ab8942bd\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.358454 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e953827-0dd0-4148-81de-bfb2ab8942bd-etc-machine-id\") pod \"4e953827-0dd0-4148-81de-bfb2ab8942bd\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.358517 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data-custom\") pod \"4e953827-0dd0-4148-81de-bfb2ab8942bd\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.358574 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-combined-ca-bundle\") pod 
\"4e953827-0dd0-4148-81de-bfb2ab8942bd\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.358640 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vctsc\" (UniqueName: \"kubernetes.io/projected/4e953827-0dd0-4148-81de-bfb2ab8942bd-kube-api-access-vctsc\") pod \"4e953827-0dd0-4148-81de-bfb2ab8942bd\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.358707 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-scripts\") pod \"4e953827-0dd0-4148-81de-bfb2ab8942bd\" (UID: \"4e953827-0dd0-4148-81de-bfb2ab8942bd\") " Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.359912 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e953827-0dd0-4148-81de-bfb2ab8942bd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4e953827-0dd0-4148-81de-bfb2ab8942bd" (UID: "4e953827-0dd0-4148-81de-bfb2ab8942bd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.370002 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e953827-0dd0-4148-81de-bfb2ab8942bd-kube-api-access-vctsc" (OuterVolumeSpecName: "kube-api-access-vctsc") pod "4e953827-0dd0-4148-81de-bfb2ab8942bd" (UID: "4e953827-0dd0-4148-81de-bfb2ab8942bd"). InnerVolumeSpecName "kube-api-access-vctsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.370498 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4e953827-0dd0-4148-81de-bfb2ab8942bd" (UID: "4e953827-0dd0-4148-81de-bfb2ab8942bd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.383781 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-scripts" (OuterVolumeSpecName: "scripts") pod "4e953827-0dd0-4148-81de-bfb2ab8942bd" (UID: "4e953827-0dd0-4148-81de-bfb2ab8942bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.448220 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e953827-0dd0-4148-81de-bfb2ab8942bd" (UID: "4e953827-0dd0-4148-81de-bfb2ab8942bd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.461522 4955 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4e953827-0dd0-4148-81de-bfb2ab8942bd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.461555 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.461566 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.461575 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vctsc\" (UniqueName: \"kubernetes.io/projected/4e953827-0dd0-4148-81de-bfb2ab8942bd-kube-api-access-vctsc\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.461587 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.489770 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data" (OuterVolumeSpecName: "config-data") pod "4e953827-0dd0-4148-81de-bfb2ab8942bd" (UID: "4e953827-0dd0-4148-81de-bfb2ab8942bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:02 crc kubenswrapper[4955]: I1128 06:40:02.562897 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e953827-0dd0-4148-81de-bfb2ab8942bd-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.058915 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4e953827-0dd0-4148-81de-bfb2ab8942bd","Type":"ContainerDied","Data":"3007d8ac2fda9b6cc09d3c40bece788f4d21d5996230c299ae5138357e9d1b25"} Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.058976 4955 scope.go:117] "RemoveContainer" containerID="4f148f34a0128f06ebc544ccfd223e6a40008e6511a072cd56adf5d514a80fd8" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.059127 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.071356 4955 generic.go:334] "Generic (PLEG): container finished" podID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerID="0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2" exitCode=0 Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.071396 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9c465b4d8-cslvv" event={"ID":"f3a8eb88-043f-44ca-8b8c-68288a2045d9","Type":"ContainerDied","Data":"0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2"} Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.100071 4955 scope.go:117] "RemoveContainer" containerID="5e657792d797d8dd80d3389a6ce634a026a69a2f7dad6b338591a9238374bf5b" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.102242 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.112445 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-scheduler-0"] Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.134573 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135008 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00785eeb-47d1-4a5e-9c69-489af5075748" containerName="init" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135032 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="00785eeb-47d1-4a5e-9c69-489af5075748" containerName="init" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135049 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerName="neutron-api" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135058 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerName="neutron-api" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135079 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerName="horizon" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135088 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerName="horizon" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135102 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerName="probe" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135109 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerName="probe" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135126 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerName="horizon" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135136 4955 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerName="horizon" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135151 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerName="horizon-log" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135159 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerName="horizon-log" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135179 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerName="cinder-scheduler" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135188 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerName="cinder-scheduler" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135203 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerName="neutron-httpd" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135210 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerName="neutron-httpd" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135229 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerName="horizon-log" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135237 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerName="horizon-log" Nov 28 06:40:03 crc kubenswrapper[4955]: E1128 06:40:03.135259 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00785eeb-47d1-4a5e-9c69-489af5075748" containerName="dnsmasq-dns" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135267 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="00785eeb-47d1-4a5e-9c69-489af5075748" containerName="dnsmasq-dns" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135463 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="00785eeb-47d1-4a5e-9c69-489af5075748" containerName="dnsmasq-dns" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135478 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerName="horizon-log" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135490 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerName="neutron-httpd" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135518 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerName="horizon" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135530 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerName="cinder-scheduler" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135542 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffad8eb2-ac71-461f-a0fc-0203951d3e05" containerName="horizon" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135554 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" containerName="probe" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135564 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="1399b8d3-cee5-44f3-9747-701eb22526a8" containerName="horizon-log" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.135585 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f0f65a-7f13-4483-9806-7fa8d2738a27" containerName="neutron-api" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.136882 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.139356 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.143920 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.197604 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b4956ccd4-8qnhx" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.256307 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5fb779f59d-wm9cr"] Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.257311 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5fb779f59d-wm9cr" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api-log" containerID="cri-o://1712c9c1d6ab3ff40c3ff583338abb2ae2d9ed67e47719cad0c670dc0fac4be4" gracePeriod=30 Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.257581 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5fb779f59d-wm9cr" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api" containerID="cri-o://e17470f6dab2e167f7f7013223651e42e2e959bec2ae84c35b28ac4b3b12b575" gracePeriod=30 Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.268659 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5fb779f59d-wm9cr" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": EOF" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.276774 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.276839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-scripts\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.276900 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6dv\" (UniqueName: \"kubernetes.io/projected/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-kube-api-access-pn6dv\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.276925 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.277006 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-config-data\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.277033 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.378647 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.378705 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-scripts\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.378772 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6dv\" (UniqueName: \"kubernetes.io/projected/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-kube-api-access-pn6dv\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.378800 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.378866 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-config-data\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " 
pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.378895 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.379289 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.387894 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-scripts\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.388224 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-config-data\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.389110 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.390096 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.401061 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6dv\" (UniqueName: \"kubernetes.io/projected/7f74ee90-8d6d-42f1-8aa7-61d06d62f07c-kube-api-access-pn6dv\") pod \"cinder-scheduler-0\" (UID: \"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c\") " pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.465103 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.715064 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e953827-0dd0-4148-81de-bfb2ab8942bd" path="/var/lib/kubelet/pods/4e953827-0dd0-4148-81de-bfb2ab8942bd/volumes" Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.926230 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 06:40:03 crc kubenswrapper[4955]: W1128 06:40:03.933038 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f74ee90_8d6d_42f1_8aa7_61d06d62f07c.slice/crio-04b1f21623c216a542ca399f1f3ff073ad1655b59044ac517c02c6ca09916118 WatchSource:0}: Error finding container 04b1f21623c216a542ca399f1f3ff073ad1655b59044ac517c02c6ca09916118: Status 404 returned error can't find the container with id 04b1f21623c216a542ca399f1f3ff073ad1655b59044ac517c02c6ca09916118 Nov 28 06:40:03 crc kubenswrapper[4955]: I1128 06:40:03.944449 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9c465b4d8-cslvv" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Nov 28 06:40:04 crc kubenswrapper[4955]: I1128 06:40:04.087599 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerID="1712c9c1d6ab3ff40c3ff583338abb2ae2d9ed67e47719cad0c670dc0fac4be4" exitCode=143 Nov 28 06:40:04 crc kubenswrapper[4955]: I1128 06:40:04.087757 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fb779f59d-wm9cr" event={"ID":"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6","Type":"ContainerDied","Data":"1712c9c1d6ab3ff40c3ff583338abb2ae2d9ed67e47719cad0c670dc0fac4be4"} Nov 28 06:40:04 crc kubenswrapper[4955]: I1128 06:40:04.091843 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c","Type":"ContainerStarted","Data":"04b1f21623c216a542ca399f1f3ff073ad1655b59044ac517c02c6ca09916118"} Nov 28 06:40:05 crc kubenswrapper[4955]: I1128 06:40:05.041238 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 28 06:40:05 crc kubenswrapper[4955]: I1128 06:40:05.115664 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c","Type":"ContainerStarted","Data":"d686c410d9e71b96f1ba2839512ad8d7783fd7a76f9c9b6acad972e24c653db6"} Nov 28 06:40:06 crc kubenswrapper[4955]: I1128 06:40:06.126767 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7f74ee90-8d6d-42f1-8aa7-61d06d62f07c","Type":"ContainerStarted","Data":"0d92347f3893cd6706587e593ec8ea69df9eec02b7d923d0bfcc2344f1e5fd59"} Nov 28 06:40:06 crc kubenswrapper[4955]: I1128 06:40:06.155148 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.155129285 
podStartE2EDuration="3.155129285s" podCreationTimestamp="2025-11-28 06:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:40:06.15310939 +0000 UTC m=+1128.742364990" watchObservedRunningTime="2025-11-28 06:40:06.155129285 +0000 UTC m=+1128.744384865" Nov 28 06:40:07 crc kubenswrapper[4955]: I1128 06:40:07.674274 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5fb779f59d-wm9cr" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:44720->10.217.0.161:9311: read: connection reset by peer" Nov 28 06:40:07 crc kubenswrapper[4955]: I1128 06:40:07.674316 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5fb779f59d-wm9cr" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:44736->10.217.0.161:9311: read: connection reset by peer" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.171818 4955 generic.go:334] "Generic (PLEG): container finished" podID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerID="e17470f6dab2e167f7f7013223651e42e2e959bec2ae84c35b28ac4b3b12b575" exitCode=0 Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.171859 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fb779f59d-wm9cr" event={"ID":"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6","Type":"ContainerDied","Data":"e17470f6dab2e167f7f7013223651e42e2e959bec2ae84c35b28ac4b3b12b575"} Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.171889 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fb779f59d-wm9cr" 
event={"ID":"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6","Type":"ContainerDied","Data":"7ea2f9d31a77702ff5507990570e7f9bbabe6738f2f4ff05f57f5af18dde1f64"} Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.171902 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ea2f9d31a77702ff5507990570e7f9bbabe6738f2f4ff05f57f5af18dde1f64" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.189674 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.286950 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data\") pod \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.287040 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data-custom\") pod \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.287072 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-logs\") pod \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.287112 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-combined-ca-bundle\") pod \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " Nov 28 06:40:08 crc kubenswrapper[4955]: 
I1128 06:40:08.287156 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzwks\" (UniqueName: \"kubernetes.io/projected/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-kube-api-access-mzwks\") pod \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\" (UID: \"f2b6c694-f22a-4c7e-80a6-1fc37af59ce6\") " Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.287637 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-logs" (OuterVolumeSpecName: "logs") pod "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" (UID: "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.289921 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.293128 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" (UID: "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.307332 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-kube-api-access-mzwks" (OuterVolumeSpecName: "kube-api-access-mzwks") pod "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" (UID: "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6"). InnerVolumeSpecName "kube-api-access-mzwks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.318282 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d446689d4-fvjm6" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.351683 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" (UID: "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.377052 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data" (OuterVolumeSpecName: "config-data") pod "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" (UID: "f2b6c694-f22a-4c7e-80a6-1fc37af59ce6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.389024 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzwks\" (UniqueName: \"kubernetes.io/projected/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-kube-api-access-mzwks\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.389051 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.389061 4955 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.389070 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.389078 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.465986 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 06:40:08 crc kubenswrapper[4955]: I1128 06:40:08.944722 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-766d6648f9-vfvxt" Nov 28 06:40:09 crc kubenswrapper[4955]: I1128 06:40:09.179874 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fb779f59d-wm9cr" Nov 28 06:40:09 crc kubenswrapper[4955]: I1128 06:40:09.210991 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5fb779f59d-wm9cr"] Nov 28 06:40:09 crc kubenswrapper[4955]: I1128 06:40:09.219474 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5fb779f59d-wm9cr"] Nov 28 06:40:09 crc kubenswrapper[4955]: I1128 06:40:09.717245 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" path="/var/lib/kubelet/pods/f2b6c694-f22a-4c7e-80a6-1fc37af59ce6/volumes" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.044948 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 06:40:12 crc kubenswrapper[4955]: E1128 06:40:12.045337 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api-log" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.045350 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api-log" Nov 28 06:40:12 crc kubenswrapper[4955]: E1128 06:40:12.045376 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.045382 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.045563 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api-log" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.045575 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b6c694-f22a-4c7e-80a6-1fc37af59ce6" containerName="barbican-api" Nov 28 
06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.046128 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.048269 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.048665 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.049490 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8fn2p" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.052733 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.170108 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f979z\" (UniqueName: \"kubernetes.io/projected/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-kube-api-access-f979z\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.170164 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-openstack-config-secret\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.170188 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-openstack-config\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " 
pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.170350 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.272929 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.273579 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f979z\" (UniqueName: \"kubernetes.io/projected/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-kube-api-access-f979z\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.273624 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-openstack-config-secret\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.273645 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-openstack-config\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.274855 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-openstack-config\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.279727 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.293109 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-openstack-config-secret\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.299179 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f979z\" (UniqueName: \"kubernetes.io/projected/98ccf66c-347b-4fbe-9b2e-974e15e3eea7-kube-api-access-f979z\") pod \"openstackclient\" (UID: \"98ccf66c-347b-4fbe-9b2e-974e15e3eea7\") " pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.363173 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 06:40:12 crc kubenswrapper[4955]: I1128 06:40:12.844766 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 06:40:13 crc kubenswrapper[4955]: I1128 06:40:13.214039 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"98ccf66c-347b-4fbe-9b2e-974e15e3eea7","Type":"ContainerStarted","Data":"6b970ee554cc587fee3fd3e4ff8ade4fa109e090cbe12d1acb975c82a71745ae"} Nov 28 06:40:13 crc kubenswrapper[4955]: I1128 06:40:13.721094 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 06:40:13 crc kubenswrapper[4955]: I1128 06:40:13.944753 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9c465b4d8-cslvv" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.319729 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5d59886dc-t4pgs"] Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.321968 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.324097 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.324275 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.325029 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.335775 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d59886dc-t4pgs"] Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.464984 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-public-tls-certs\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.465061 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91783657-7b6c-4053-9c14-aed825d54a73-etc-swift\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.465440 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcfmd\" (UniqueName: \"kubernetes.io/projected/91783657-7b6c-4053-9c14-aed825d54a73-kube-api-access-pcfmd\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: 
I1128 06:40:15.465692 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91783657-7b6c-4053-9c14-aed825d54a73-log-httpd\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.465809 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91783657-7b6c-4053-9c14-aed825d54a73-run-httpd\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.466344 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-combined-ca-bundle\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.466391 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-internal-tls-certs\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.466470 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-config-data\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc 
kubenswrapper[4955]: I1128 06:40:15.568881 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-combined-ca-bundle\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.568950 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-internal-tls-certs\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.568982 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-config-data\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.569033 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-public-tls-certs\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.569062 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91783657-7b6c-4053-9c14-aed825d54a73-etc-swift\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.569170 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcfmd\" (UniqueName: \"kubernetes.io/projected/91783657-7b6c-4053-9c14-aed825d54a73-kube-api-access-pcfmd\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.569199 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91783657-7b6c-4053-9c14-aed825d54a73-log-httpd\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.569237 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91783657-7b6c-4053-9c14-aed825d54a73-run-httpd\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.570161 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91783657-7b6c-4053-9c14-aed825d54a73-run-httpd\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.571451 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91783657-7b6c-4053-9c14-aed825d54a73-log-httpd\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.577560 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-internal-tls-certs\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.580346 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-config-data\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.583758 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-combined-ca-bundle\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.585831 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91783657-7b6c-4053-9c14-aed825d54a73-public-tls-certs\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.589212 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91783657-7b6c-4053-9c14-aed825d54a73-etc-swift\") pod \"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.593952 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcfmd\" (UniqueName: \"kubernetes.io/projected/91783657-7b6c-4053-9c14-aed825d54a73-kube-api-access-pcfmd\") pod 
\"swift-proxy-5d59886dc-t4pgs\" (UID: \"91783657-7b6c-4053-9c14-aed825d54a73\") " pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:15 crc kubenswrapper[4955]: I1128 06:40:15.653031 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:16 crc kubenswrapper[4955]: I1128 06:40:16.264227 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5d59886dc-t4pgs"] Nov 28 06:40:16 crc kubenswrapper[4955]: W1128 06:40:16.287679 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91783657_7b6c_4053_9c14_aed825d54a73.slice/crio-b6cf41d2cfec3bc43b7f4fc11948ce20269d1d0a5db65c0b1ba49c15c657dc75 WatchSource:0}: Error finding container b6cf41d2cfec3bc43b7f4fc11948ce20269d1d0a5db65c0b1ba49c15c657dc75: Status 404 returned error can't find the container with id b6cf41d2cfec3bc43b7f4fc11948ce20269d1d0a5db65c0b1ba49c15c657dc75 Nov 28 06:40:16 crc kubenswrapper[4955]: I1128 06:40:16.416696 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:16 crc kubenswrapper[4955]: I1128 06:40:16.417081 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="ceilometer-central-agent" containerID="cri-o://21b63b57ac9ddee1568311525bddd9e857e644749c800af97ab513efbeee524a" gracePeriod=30 Nov 28 06:40:16 crc kubenswrapper[4955]: I1128 06:40:16.417926 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="sg-core" containerID="cri-o://9989fc09f844dc95050233e5333c9f54ec3ab1c2e0968b70db20aa5611b07e56" gracePeriod=30 Nov 28 06:40:16 crc kubenswrapper[4955]: I1128 06:40:16.418092 4955 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="proxy-httpd" containerID="cri-o://f89611ce3bda246149ac6c272b666e22c658dd19303f4af902bb286f617b7092" gracePeriod=30 Nov 28 06:40:16 crc kubenswrapper[4955]: I1128 06:40:16.418172 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="ceilometer-notification-agent" containerID="cri-o://7663f57d4616fae3d2c0db5f9458e0b890b81d158ab39247efa411e6cf3e2be2" gracePeriod=30 Nov 28 06:40:16 crc kubenswrapper[4955]: I1128 06:40:16.536856 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": read tcp 10.217.0.2:37798->10.217.0.166:3000: read: connection reset by peer" Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.260753 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d59886dc-t4pgs" event={"ID":"91783657-7b6c-4053-9c14-aed825d54a73","Type":"ContainerStarted","Data":"8739cb0adbebcc2c0e7bbfa595322b264c98caa02095cd3f72c306af01e39b82"} Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.261124 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d59886dc-t4pgs" event={"ID":"91783657-7b6c-4053-9c14-aed825d54a73","Type":"ContainerStarted","Data":"196e7fce68914e8e94d1ae92d7f449daa2c86e5fe364d36836d5264d444e9a77"} Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.261148 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.261158 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5d59886dc-t4pgs" 
event={"ID":"91783657-7b6c-4053-9c14-aed825d54a73","Type":"ContainerStarted","Data":"b6cf41d2cfec3bc43b7f4fc11948ce20269d1d0a5db65c0b1ba49c15c657dc75"} Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.266726 4955 generic.go:334] "Generic (PLEG): container finished" podID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerID="f89611ce3bda246149ac6c272b666e22c658dd19303f4af902bb286f617b7092" exitCode=0 Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.266759 4955 generic.go:334] "Generic (PLEG): container finished" podID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerID="9989fc09f844dc95050233e5333c9f54ec3ab1c2e0968b70db20aa5611b07e56" exitCode=2 Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.266770 4955 generic.go:334] "Generic (PLEG): container finished" podID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerID="21b63b57ac9ddee1568311525bddd9e857e644749c800af97ab513efbeee524a" exitCode=0 Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.266797 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerDied","Data":"f89611ce3bda246149ac6c272b666e22c658dd19303f4af902bb286f617b7092"} Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.266827 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerDied","Data":"9989fc09f844dc95050233e5333c9f54ec3ab1c2e0968b70db20aa5611b07e56"} Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.266841 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerDied","Data":"21b63b57ac9ddee1568311525bddd9e857e644749c800af97ab513efbeee524a"} Nov 28 06:40:17 crc kubenswrapper[4955]: I1128 06:40:17.291892 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5d59886dc-t4pgs" 
podStartSLOduration=2.291870345 podStartE2EDuration="2.291870345s" podCreationTimestamp="2025-11-28 06:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:40:17.279729974 +0000 UTC m=+1139.868985574" watchObservedRunningTime="2025-11-28 06:40:17.291870345 +0000 UTC m=+1139.881125925" Nov 28 06:40:18 crc kubenswrapper[4955]: I1128 06:40:18.277122 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:19 crc kubenswrapper[4955]: I1128 06:40:19.291877 4955 generic.go:334] "Generic (PLEG): container finished" podID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerID="7663f57d4616fae3d2c0db5f9458e0b890b81d158ab39247efa411e6cf3e2be2" exitCode=0 Nov 28 06:40:19 crc kubenswrapper[4955]: I1128 06:40:19.291959 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerDied","Data":"7663f57d4616fae3d2c0db5f9458e0b890b81d158ab39247efa411e6cf3e2be2"} Nov 28 06:40:19 crc kubenswrapper[4955]: I1128 06:40:19.962263 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-b2j6d"] Nov 28 06:40:19 crc kubenswrapper[4955]: I1128 06:40:19.964819 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:19 crc kubenswrapper[4955]: I1128 06:40:19.979909 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b2j6d"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.033968 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pgx4c"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.037550 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.044178 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pgx4c"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.061938 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7svxt\" (UniqueName: \"kubernetes.io/projected/df327a4d-740c-44af-aeab-e196f406408d-kube-api-access-7svxt\") pod \"nova-api-db-create-b2j6d\" (UID: \"df327a4d-740c-44af-aeab-e196f406408d\") " pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.062083 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df327a4d-740c-44af-aeab-e196f406408d-operator-scripts\") pod \"nova-api-db-create-b2j6d\" (UID: \"df327a4d-740c-44af-aeab-e196f406408d\") " pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.142499 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-fc74-account-create-update-zxq7m"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.144055 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.146091 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.156619 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fc74-account-create-update-zxq7m"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.164116 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjh4\" (UniqueName: \"kubernetes.io/projected/123962fc-cb22-41e5-92c2-fce487c07003-kube-api-access-mbjh4\") pod \"nova-cell0-db-create-pgx4c\" (UID: \"123962fc-cb22-41e5-92c2-fce487c07003\") " pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.164194 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123962fc-cb22-41e5-92c2-fce487c07003-operator-scripts\") pod \"nova-cell0-db-create-pgx4c\" (UID: \"123962fc-cb22-41e5-92c2-fce487c07003\") " pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.164222 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7svxt\" (UniqueName: \"kubernetes.io/projected/df327a4d-740c-44af-aeab-e196f406408d-kube-api-access-7svxt\") pod \"nova-api-db-create-b2j6d\" (UID: \"df327a4d-740c-44af-aeab-e196f406408d\") " pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.164313 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df327a4d-740c-44af-aeab-e196f406408d-operator-scripts\") pod \"nova-api-db-create-b2j6d\" (UID: \"df327a4d-740c-44af-aeab-e196f406408d\") " 
pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.165201 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df327a4d-740c-44af-aeab-e196f406408d-operator-scripts\") pod \"nova-api-db-create-b2j6d\" (UID: \"df327a4d-740c-44af-aeab-e196f406408d\") " pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.197459 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7svxt\" (UniqueName: \"kubernetes.io/projected/df327a4d-740c-44af-aeab-e196f406408d-kube-api-access-7svxt\") pod \"nova-api-db-create-b2j6d\" (UID: \"df327a4d-740c-44af-aeab-e196f406408d\") " pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.247588 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-j944d"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.249059 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.258447 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-j944d"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.265554 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbjh4\" (UniqueName: \"kubernetes.io/projected/123962fc-cb22-41e5-92c2-fce487c07003-kube-api-access-mbjh4\") pod \"nova-cell0-db-create-pgx4c\" (UID: \"123962fc-cb22-41e5-92c2-fce487c07003\") " pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.265629 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123962fc-cb22-41e5-92c2-fce487c07003-operator-scripts\") pod \"nova-cell0-db-create-pgx4c\" (UID: \"123962fc-cb22-41e5-92c2-fce487c07003\") " pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.265661 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/071d5666-d995-439f-b56a-c4feb0b11ce1-operator-scripts\") pod \"nova-api-fc74-account-create-update-zxq7m\" (UID: \"071d5666-d995-439f-b56a-c4feb0b11ce1\") " pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.265688 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph9b5\" (UniqueName: \"kubernetes.io/projected/071d5666-d995-439f-b56a-c4feb0b11ce1-kube-api-access-ph9b5\") pod \"nova-api-fc74-account-create-update-zxq7m\" (UID: \"071d5666-d995-439f-b56a-c4feb0b11ce1\") " pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.266452 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123962fc-cb22-41e5-92c2-fce487c07003-operator-scripts\") pod \"nova-cell0-db-create-pgx4c\" (UID: \"123962fc-cb22-41e5-92c2-fce487c07003\") " pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.283197 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbjh4\" (UniqueName: \"kubernetes.io/projected/123962fc-cb22-41e5-92c2-fce487c07003-kube-api-access-mbjh4\") pod \"nova-cell0-db-create-pgx4c\" (UID: \"123962fc-cb22-41e5-92c2-fce487c07003\") " pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.288708 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.344738 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-32f5-account-create-update-wd6jr"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.346170 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.348656 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.354620 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.360655 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-32f5-account-create-update-wd6jr"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.368117 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-operator-scripts\") pod \"nova-cell1-db-create-j944d\" (UID: \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\") " pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.368163 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/071d5666-d995-439f-b56a-c4feb0b11ce1-operator-scripts\") pod \"nova-api-fc74-account-create-update-zxq7m\" (UID: \"071d5666-d995-439f-b56a-c4feb0b11ce1\") " pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.368193 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph9b5\" (UniqueName: \"kubernetes.io/projected/071d5666-d995-439f-b56a-c4feb0b11ce1-kube-api-access-ph9b5\") pod \"nova-api-fc74-account-create-update-zxq7m\" (UID: \"071d5666-d995-439f-b56a-c4feb0b11ce1\") " pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.368946 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt8lt\" (UniqueName: \"kubernetes.io/projected/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-kube-api-access-vt8lt\") pod \"nova-cell1-db-create-j944d\" (UID: \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\") " pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.374693 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/071d5666-d995-439f-b56a-c4feb0b11ce1-operator-scripts\") pod \"nova-api-fc74-account-create-update-zxq7m\" (UID: \"071d5666-d995-439f-b56a-c4feb0b11ce1\") " pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.384883 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph9b5\" (UniqueName: \"kubernetes.io/projected/071d5666-d995-439f-b56a-c4feb0b11ce1-kube-api-access-ph9b5\") pod \"nova-api-fc74-account-create-update-zxq7m\" (UID: \"071d5666-d995-439f-b56a-c4feb0b11ce1\") " pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.460829 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.470794 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-operator-scripts\") pod \"nova-cell1-db-create-j944d\" (UID: \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\") " pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.470874 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkwdr\" (UniqueName: \"kubernetes.io/projected/530951b8-e0b5-44a3-aaf5-48c74bf91dba-kube-api-access-rkwdr\") pod \"nova-cell0-32f5-account-create-update-wd6jr\" (UID: \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\") " pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.470993 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/530951b8-e0b5-44a3-aaf5-48c74bf91dba-operator-scripts\") pod \"nova-cell0-32f5-account-create-update-wd6jr\" (UID: \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\") " pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.471025 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt8lt\" (UniqueName: \"kubernetes.io/projected/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-kube-api-access-vt8lt\") pod \"nova-cell1-db-create-j944d\" (UID: \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\") " pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.472205 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-operator-scripts\") pod \"nova-cell1-db-create-j944d\" (UID: \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\") " pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.487534 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt8lt\" (UniqueName: \"kubernetes.io/projected/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-kube-api-access-vt8lt\") pod \"nova-cell1-db-create-j944d\" (UID: \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\") " pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.551836 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-38fa-account-create-update-l7s8x"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.553103 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.554994 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.569774 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.573100 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkwdr\" (UniqueName: \"kubernetes.io/projected/530951b8-e0b5-44a3-aaf5-48c74bf91dba-kube-api-access-rkwdr\") pod \"nova-cell0-32f5-account-create-update-wd6jr\" (UID: \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\") " pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.573208 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530951b8-e0b5-44a3-aaf5-48c74bf91dba-operator-scripts\") pod \"nova-cell0-32f5-account-create-update-wd6jr\" (UID: \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\") " pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.573753 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530951b8-e0b5-44a3-aaf5-48c74bf91dba-operator-scripts\") pod \"nova-cell0-32f5-account-create-update-wd6jr\" (UID: \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\") " pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.577484 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-38fa-account-create-update-l7s8x"] Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.594217 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rkwdr\" (UniqueName: \"kubernetes.io/projected/530951b8-e0b5-44a3-aaf5-48c74bf91dba-kube-api-access-rkwdr\") pod \"nova-cell0-32f5-account-create-update-wd6jr\" (UID: \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\") " pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.674461 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-operator-scripts\") pod \"nova-cell1-38fa-account-create-update-l7s8x\" (UID: \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\") " pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.674640 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fp7k\" (UniqueName: \"kubernetes.io/projected/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-kube-api-access-5fp7k\") pod \"nova-cell1-38fa-account-create-update-l7s8x\" (UID: \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\") " pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.674873 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.779639 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fp7k\" (UniqueName: \"kubernetes.io/projected/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-kube-api-access-5fp7k\") pod \"nova-cell1-38fa-account-create-update-l7s8x\" (UID: \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\") " pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.779813 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-operator-scripts\") pod \"nova-cell1-38fa-account-create-update-l7s8x\" (UID: \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\") " pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.780618 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-operator-scripts\") pod \"nova-cell1-38fa-account-create-update-l7s8x\" (UID: \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\") " pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.795265 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fp7k\" (UniqueName: \"kubernetes.io/projected/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-kube-api-access-5fp7k\") pod \"nova-cell1-38fa-account-create-update-l7s8x\" (UID: \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\") " pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:20 crc kubenswrapper[4955]: I1128 06:40:20.872973 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:21 crc kubenswrapper[4955]: I1128 06:40:21.845087 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:40:21 crc kubenswrapper[4955]: I1128 06:40:21.845335 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerName="glance-log" containerID="cri-o://2791ed070156d863d7468ff0af4b05b241b6873365d1545322d7b4f9d648d74e" gracePeriod=30 Nov 28 06:40:21 crc kubenswrapper[4955]: I1128 06:40:21.845427 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerName="glance-httpd" containerID="cri-o://349206e3158d352a88674cc03d7ee8e0af33b899d8b967adc29c619293006bb5" gracePeriod=30 Nov 28 06:40:22 crc kubenswrapper[4955]: I1128 06:40:22.329742 4955 generic.go:334] "Generic (PLEG): container finished" podID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerID="2791ed070156d863d7468ff0af4b05b241b6873365d1545322d7b4f9d648d74e" exitCode=143 Nov 28 06:40:22 crc kubenswrapper[4955]: I1128 06:40:22.329842 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebf672dd-567f-4cca-b5c8-7617bb3a02c1","Type":"ContainerDied","Data":"2791ed070156d863d7468ff0af4b05b241b6873365d1545322d7b4f9d648d74e"} Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.328973 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pgx4c"] Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.647695 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-32f5-account-create-update-wd6jr"] Nov 28 06:40:23 crc kubenswrapper[4955]: W1128 06:40:23.651542 4955 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod530951b8_e0b5_44a3_aaf5_48c74bf91dba.slice/crio-d4a3e15e1446388cb604fd8e934fbe9ef2d3cbe58b907bbecefbfdae59053e7b WatchSource:0}: Error finding container d4a3e15e1446388cb604fd8e934fbe9ef2d3cbe58b907bbecefbfdae59053e7b: Status 404 returned error can't find the container with id d4a3e15e1446388cb604fd8e934fbe9ef2d3cbe58b907bbecefbfdae59053e7b Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.722304 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fc74-account-create-update-zxq7m"] Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.728519 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-38fa-account-create-update-l7s8x"] Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.815402 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b2j6d"] Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.827751 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-j944d"] Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.872342 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.872594 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerName="glance-log" containerID="cri-o://c16dad1f89126d44dfe03e59d9908d81ed918475b03bdd78dcfc01052b9f88e4" gracePeriod=30 Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.872780 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerName="glance-httpd" containerID="cri-o://d6f084fb3be406b3cfb2874cf1b7986614b2594cb7847501b9fcf7eec3d9a640" gracePeriod=30 Nov 28 
06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.945074 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-9c465b4d8-cslvv" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Nov 28 06:40:23 crc kubenswrapper[4955]: I1128 06:40:23.945170 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.373938 4955 generic.go:334] "Generic (PLEG): container finished" podID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerID="c16dad1f89126d44dfe03e59d9908d81ed918475b03bdd78dcfc01052b9f88e4" exitCode=143 Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.375013 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aed34078-a41e-4dda-bb13-b8dd5379ba91","Type":"ContainerDied","Data":"c16dad1f89126d44dfe03e59d9908d81ed918475b03bdd78dcfc01052b9f88e4"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.376757 4955 generic.go:334] "Generic (PLEG): container finished" podID="123962fc-cb22-41e5-92c2-fce487c07003" containerID="c3e614bbaad971ec28217fa5ede2c05ba656679b8cbef022e619f4af6bed57ea" exitCode=0 Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.376834 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pgx4c" event={"ID":"123962fc-cb22-41e5-92c2-fce487c07003","Type":"ContainerDied","Data":"c3e614bbaad971ec28217fa5ede2c05ba656679b8cbef022e619f4af6bed57ea"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.376883 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pgx4c" 
event={"ID":"123962fc-cb22-41e5-92c2-fce487c07003","Type":"ContainerStarted","Data":"e0237908e5700c659d23a753b11811ac231fd2b9c3896e82bd9e5bd35e2c9f69"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.377776 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j944d" event={"ID":"a3b5c523-cb74-4ad4-b14b-0eefd62138b1","Type":"ContainerStarted","Data":"96fe75c176d4c166d5e7b5e6be133726ee3ecffc0e940ae4fc2eea44180e63c1"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.379290 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" event={"ID":"c059ac08-bd13-4b84-a39f-f1a9e2260b5e","Type":"ContainerStarted","Data":"5875f58846ca9808fcd2e50acb546eadaa6919541781087c149e0d35bfa2ef88"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.379397 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" event={"ID":"c059ac08-bd13-4b84-a39f-f1a9e2260b5e","Type":"ContainerStarted","Data":"36a6928a75b796286d677dbb1e9a7b69e67094126cda3f7a45c850afb5ea0a37"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.382313 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a4e26621-fb49-4397-80c0-e4be8cbc7c41","Type":"ContainerDied","Data":"8f4a76fee989012224876e50ac30964cd855703013e000069e435da60d8ef4ca"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.382344 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f4a76fee989012224876e50ac30964cd855703013e000069e435da60d8ef4ca" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.383933 4955 generic.go:334] "Generic (PLEG): container finished" podID="530951b8-e0b5-44a3-aaf5-48c74bf91dba" containerID="bcdc956db5ad451139028ac7ba3d67c105b88c3119cc68aee02e711c5bccb203" exitCode=0 Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.384002 4955 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" event={"ID":"530951b8-e0b5-44a3-aaf5-48c74bf91dba","Type":"ContainerDied","Data":"bcdc956db5ad451139028ac7ba3d67c105b88c3119cc68aee02e711c5bccb203"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.384028 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" event={"ID":"530951b8-e0b5-44a3-aaf5-48c74bf91dba","Type":"ContainerStarted","Data":"d4a3e15e1446388cb604fd8e934fbe9ef2d3cbe58b907bbecefbfdae59053e7b"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.385658 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fc74-account-create-update-zxq7m" event={"ID":"071d5666-d995-439f-b56a-c4feb0b11ce1","Type":"ContainerStarted","Data":"539518ae979c619e26be06d94055175af6fc8d73359f9303576a6d8784eec246"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.385700 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fc74-account-create-update-zxq7m" event={"ID":"071d5666-d995-439f-b56a-c4feb0b11ce1","Type":"ContainerStarted","Data":"ae7024d06f8a54c2a6f1914f4278a0ae7e8dfca3d9b81f23ba7eebf713351994"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.387755 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b2j6d" event={"ID":"df327a4d-740c-44af-aeab-e196f406408d","Type":"ContainerStarted","Data":"526aa47dee46caf9ee0efe06b392a45da00f59f86ad52873f28fdc687cb3f9fa"} Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.406612 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-fc74-account-create-update-zxq7m" podStartSLOduration=4.406597241 podStartE2EDuration="4.406597241s" podCreationTimestamp="2025-11-28 06:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:40:24.405793019 +0000 UTC 
m=+1146.995048589" watchObservedRunningTime="2025-11-28 06:40:24.406597241 +0000 UTC m=+1146.995852811" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.433391 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" podStartSLOduration=4.4333781 podStartE2EDuration="4.4333781s" podCreationTimestamp="2025-11-28 06:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:40:24.427837669 +0000 UTC m=+1147.017093239" watchObservedRunningTime="2025-11-28 06:40:24.4333781 +0000 UTC m=+1147.022633670" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.474445 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.558990 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-combined-ca-bundle\") pod \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.559057 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-log-httpd\") pod \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.559098 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-scripts\") pod \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.559159 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-run-httpd\") pod \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.559219 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-sg-core-conf-yaml\") pod \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.559259 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-config-data\") pod \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.559332 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpd4r\" (UniqueName: \"kubernetes.io/projected/a4e26621-fb49-4397-80c0-e4be8cbc7c41-kube-api-access-gpd4r\") pod \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\" (UID: \"a4e26621-fb49-4397-80c0-e4be8cbc7c41\") " Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.560389 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a4e26621-fb49-4397-80c0-e4be8cbc7c41" (UID: "a4e26621-fb49-4397-80c0-e4be8cbc7c41"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.564017 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-scripts" (OuterVolumeSpecName: "scripts") pod "a4e26621-fb49-4397-80c0-e4be8cbc7c41" (UID: "a4e26621-fb49-4397-80c0-e4be8cbc7c41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.564213 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a4e26621-fb49-4397-80c0-e4be8cbc7c41" (UID: "a4e26621-fb49-4397-80c0-e4be8cbc7c41"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.564237 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e26621-fb49-4397-80c0-e4be8cbc7c41-kube-api-access-gpd4r" (OuterVolumeSpecName: "kube-api-access-gpd4r") pod "a4e26621-fb49-4397-80c0-e4be8cbc7c41" (UID: "a4e26621-fb49-4397-80c0-e4be8cbc7c41"). InnerVolumeSpecName "kube-api-access-gpd4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.598032 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a4e26621-fb49-4397-80c0-e4be8cbc7c41" (UID: "a4e26621-fb49-4397-80c0-e4be8cbc7c41"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.661192 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpd4r\" (UniqueName: \"kubernetes.io/projected/a4e26621-fb49-4397-80c0-e4be8cbc7c41-kube-api-access-gpd4r\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.661579 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.661593 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.661606 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a4e26621-fb49-4397-80c0-e4be8cbc7c41-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.661618 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.701521 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4e26621-fb49-4397-80c0-e4be8cbc7c41" (UID: "a4e26621-fb49-4397-80c0-e4be8cbc7c41"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.737668 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-config-data" (OuterVolumeSpecName: "config-data") pod "a4e26621-fb49-4397-80c0-e4be8cbc7c41" (UID: "a4e26621-fb49-4397-80c0-e4be8cbc7c41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.762738 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:24 crc kubenswrapper[4955]: I1128 06:40:24.762775 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e26621-fb49-4397-80c0-e4be8cbc7c41-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.397927 4955 generic.go:334] "Generic (PLEG): container finished" podID="071d5666-d995-439f-b56a-c4feb0b11ce1" containerID="539518ae979c619e26be06d94055175af6fc8d73359f9303576a6d8784eec246" exitCode=0 Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.397984 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fc74-account-create-update-zxq7m" event={"ID":"071d5666-d995-439f-b56a-c4feb0b11ce1","Type":"ContainerDied","Data":"539518ae979c619e26be06d94055175af6fc8d73359f9303576a6d8784eec246"} Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.408247 4955 generic.go:334] "Generic (PLEG): container finished" podID="c059ac08-bd13-4b84-a39f-f1a9e2260b5e" containerID="5875f58846ca9808fcd2e50acb546eadaa6919541781087c149e0d35bfa2ef88" exitCode=0 Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.408386 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" event={"ID":"c059ac08-bd13-4b84-a39f-f1a9e2260b5e","Type":"ContainerDied","Data":"5875f58846ca9808fcd2e50acb546eadaa6919541781087c149e0d35bfa2ef88"} Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.430500 4955 generic.go:334] "Generic (PLEG): container finished" podID="a3b5c523-cb74-4ad4-b14b-0eefd62138b1" containerID="801c361689dbcfd660a5459c7211b846dfb242c2774d661203a9450d51e680f6" exitCode=0 Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.430635 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j944d" event={"ID":"a3b5c523-cb74-4ad4-b14b-0eefd62138b1","Type":"ContainerDied","Data":"801c361689dbcfd660a5459c7211b846dfb242c2774d661203a9450d51e680f6"} Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.432943 4955 generic.go:334] "Generic (PLEG): container finished" podID="df327a4d-740c-44af-aeab-e196f406408d" containerID="cbee6232c8e3d77f0e42270111c72c7ffca2f564d4086cd2a6f1fc23522abea1" exitCode=0 Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.433031 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b2j6d" event={"ID":"df327a4d-740c-44af-aeab-e196f406408d","Type":"ContainerDied","Data":"cbee6232c8e3d77f0e42270111c72c7ffca2f564d4086cd2a6f1fc23522abea1"} Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.436685 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"98ccf66c-347b-4fbe-9b2e-974e15e3eea7","Type":"ContainerStarted","Data":"9d62fb042251cce1630974a8c5f1cdd1832875395868f29818d1fd20b3742a7e"} Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.439422 4955 generic.go:334] "Generic (PLEG): container finished" podID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerID="349206e3158d352a88674cc03d7ee8e0af33b899d8b967adc29c619293006bb5" exitCode=0 Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.439694 4955 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebf672dd-567f-4cca-b5c8-7617bb3a02c1","Type":"ContainerDied","Data":"349206e3158d352a88674cc03d7ee8e0af33b899d8b967adc29c619293006bb5"} Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.439925 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.518948 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.340094481 podStartE2EDuration="13.518887816s" podCreationTimestamp="2025-11-28 06:40:12 +0000 UTC" firstStartedPulling="2025-11-28 06:40:12.854969768 +0000 UTC m=+1135.444225348" lastFinishedPulling="2025-11-28 06:40:24.033763113 +0000 UTC m=+1146.623018683" observedRunningTime="2025-11-28 06:40:25.502757467 +0000 UTC m=+1148.092013037" watchObservedRunningTime="2025-11-28 06:40:25.518887816 +0000 UTC m=+1148.108143386" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.520536 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.565662 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.582847 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.593162 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:25 crc kubenswrapper[4955]: E1128 06:40:25.595219 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerName="glance-log" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595254 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerName="glance-log" Nov 28 06:40:25 crc kubenswrapper[4955]: E1128 06:40:25.595279 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="proxy-httpd" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595288 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="proxy-httpd" Nov 28 06:40:25 crc kubenswrapper[4955]: E1128 06:40:25.595317 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="ceilometer-notification-agent" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595324 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="ceilometer-notification-agent" Nov 28 06:40:25 crc kubenswrapper[4955]: E1128 06:40:25.595337 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerName="glance-httpd" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595364 4955 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerName="glance-httpd" Nov 28 06:40:25 crc kubenswrapper[4955]: E1128 06:40:25.595389 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="ceilometer-central-agent" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595398 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="ceilometer-central-agent" Nov 28 06:40:25 crc kubenswrapper[4955]: E1128 06:40:25.595412 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="sg-core" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595420 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="sg-core" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595678 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerName="glance-httpd" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595718 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="proxy-httpd" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595732 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="sg-core" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595741 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" containerName="glance-log" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595755 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="ceilometer-notification-agent" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.595768 4955 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="ceilometer-central-agent" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.597699 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.600985 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.601179 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.601292 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.658812 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.659696 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5d59886dc-t4pgs" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.677969 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92qd4\" (UniqueName: \"kubernetes.io/projected/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-kube-api-access-92qd4\") pod \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.678080 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-logs\") pod \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.678115 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-config-data\") pod \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.678156 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-public-tls-certs\") pod \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.678299 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.678316 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-scripts\") pod \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.678338 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-combined-ca-bundle\") pod \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.678398 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-httpd-run\") pod \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\" (UID: \"ebf672dd-567f-4cca-b5c8-7617bb3a02c1\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.694446 4955 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-logs" (OuterVolumeSpecName: "logs") pod "ebf672dd-567f-4cca-b5c8-7617bb3a02c1" (UID: "ebf672dd-567f-4cca-b5c8-7617bb3a02c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.694726 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "ebf672dd-567f-4cca-b5c8-7617bb3a02c1" (UID: "ebf672dd-567f-4cca-b5c8-7617bb3a02c1"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.696536 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ebf672dd-567f-4cca-b5c8-7617bb3a02c1" (UID: "ebf672dd-567f-4cca-b5c8-7617bb3a02c1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.708059 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-scripts" (OuterVolumeSpecName: "scripts") pod "ebf672dd-567f-4cca-b5c8-7617bb3a02c1" (UID: "ebf672dd-567f-4cca-b5c8-7617bb3a02c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.711370 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-kube-api-access-92qd4" (OuterVolumeSpecName: "kube-api-access-92qd4") pod "ebf672dd-567f-4cca-b5c8-7617bb3a02c1" (UID: "ebf672dd-567f-4cca-b5c8-7617bb3a02c1"). 
InnerVolumeSpecName "kube-api-access-92qd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.747599 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebf672dd-567f-4cca-b5c8-7617bb3a02c1" (UID: "ebf672dd-567f-4cca-b5c8-7617bb3a02c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.752720 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" path="/var/lib/kubelet/pods/a4e26621-fb49-4397-80c0-e4be8cbc7c41/volumes" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.781792 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-log-httpd\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.781843 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-config-data\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.781929 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlh9\" (UniqueName: \"kubernetes.io/projected/553a2f31-da45-4d1e-86dc-c94585a5a1a5-kube-api-access-qtlh9\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.781958 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-run-httpd\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782013 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782059 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-scripts\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782091 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782161 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782175 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782198 4955 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782212 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782225 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.782236 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92qd4\" (UniqueName: \"kubernetes.io/projected/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-kube-api-access-92qd4\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.803001 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ebf672dd-567f-4cca-b5c8-7617bb3a02c1" (UID: "ebf672dd-567f-4cca-b5c8-7617bb3a02c1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.804422 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.810622 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-config-data" (OuterVolumeSpecName: "config-data") pod "ebf672dd-567f-4cca-b5c8-7617bb3a02c1" (UID: "ebf672dd-567f-4cca-b5c8-7617bb3a02c1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.837002 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883494 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbjh4\" (UniqueName: \"kubernetes.io/projected/123962fc-cb22-41e5-92c2-fce487c07003-kube-api-access-mbjh4\") pod \"123962fc-cb22-41e5-92c2-fce487c07003\" (UID: \"123962fc-cb22-41e5-92c2-fce487c07003\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883551 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123962fc-cb22-41e5-92c2-fce487c07003-operator-scripts\") pod \"123962fc-cb22-41e5-92c2-fce487c07003\" (UID: \"123962fc-cb22-41e5-92c2-fce487c07003\") " Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883653 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883737 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-log-httpd\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883756 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-config-data\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883785 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtlh9\" (UniqueName: \"kubernetes.io/projected/553a2f31-da45-4d1e-86dc-c94585a5a1a5-kube-api-access-qtlh9\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883804 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-run-httpd\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883832 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883853 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-scripts\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883899 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883910 4955 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/ebf672dd-567f-4cca-b5c8-7617bb3a02c1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.883920 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.884738 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-log-httpd\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.885019 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/123962fc-cb22-41e5-92c2-fce487c07003-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "123962fc-cb22-41e5-92c2-fce487c07003" (UID: "123962fc-cb22-41e5-92c2-fce487c07003"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.885063 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-run-httpd\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.888071 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-scripts\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.891163 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123962fc-cb22-41e5-92c2-fce487c07003-kube-api-access-mbjh4" (OuterVolumeSpecName: "kube-api-access-mbjh4") pod "123962fc-cb22-41e5-92c2-fce487c07003" (UID: "123962fc-cb22-41e5-92c2-fce487c07003"). InnerVolumeSpecName "kube-api-access-mbjh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.892085 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.894651 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-config-data\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.900357 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.906619 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtlh9\" (UniqueName: \"kubernetes.io/projected/553a2f31-da45-4d1e-86dc-c94585a5a1a5-kube-api-access-qtlh9\") pod \"ceilometer-0\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.917537 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.985539 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbjh4\" (UniqueName: \"kubernetes.io/projected/123962fc-cb22-41e5-92c2-fce487c07003-kube-api-access-mbjh4\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.985575 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/123962fc-cb22-41e5-92c2-fce487c07003-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:25 crc kubenswrapper[4955]: I1128 06:40:25.990255 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.087220 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530951b8-e0b5-44a3-aaf5-48c74bf91dba-operator-scripts\") pod \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\" (UID: \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\") " Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.087799 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/530951b8-e0b5-44a3-aaf5-48c74bf91dba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "530951b8-e0b5-44a3-aaf5-48c74bf91dba" (UID: "530951b8-e0b5-44a3-aaf5-48c74bf91dba"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.188013 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkwdr\" (UniqueName: \"kubernetes.io/projected/530951b8-e0b5-44a3-aaf5-48c74bf91dba-kube-api-access-rkwdr\") pod \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\" (UID: \"530951b8-e0b5-44a3-aaf5-48c74bf91dba\") " Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.188364 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/530951b8-e0b5-44a3-aaf5-48c74bf91dba-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.192457 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/530951b8-e0b5-44a3-aaf5-48c74bf91dba-kube-api-access-rkwdr" (OuterVolumeSpecName: "kube-api-access-rkwdr") pod "530951b8-e0b5-44a3-aaf5-48c74bf91dba" (UID: "530951b8-e0b5-44a3-aaf5-48c74bf91dba"). InnerVolumeSpecName "kube-api-access-rkwdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.289973 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkwdr\" (UniqueName: \"kubernetes.io/projected/530951b8-e0b5-44a3-aaf5-48c74bf91dba-kube-api-access-rkwdr\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.369268 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.450078 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" event={"ID":"530951b8-e0b5-44a3-aaf5-48c74bf91dba","Type":"ContainerDied","Data":"d4a3e15e1446388cb604fd8e934fbe9ef2d3cbe58b907bbecefbfdae59053e7b"} Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.450409 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4a3e15e1446388cb604fd8e934fbe9ef2d3cbe58b907bbecefbfdae59053e7b" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.450268 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-32f5-account-create-update-wd6jr" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.451462 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pgx4c" event={"ID":"123962fc-cb22-41e5-92c2-fce487c07003","Type":"ContainerDied","Data":"e0237908e5700c659d23a753b11811ac231fd2b9c3896e82bd9e5bd35e2c9f69"} Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.451495 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0237908e5700c659d23a753b11811ac231fd2b9c3896e82bd9e5bd35e2c9f69" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.451581 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pgx4c" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.453670 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ebf672dd-567f-4cca-b5c8-7617bb3a02c1","Type":"ContainerDied","Data":"e569e662d4ece535c24464715ef22ed5cdd2946a6cffacb42b78354b85d10af1"} Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.453720 4955 scope.go:117] "RemoveContainer" containerID="349206e3158d352a88674cc03d7ee8e0af33b899d8b967adc29c619293006bb5" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.453837 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.456634 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerStarted","Data":"381b00445c40a52136df9e2487940363f61ae7ce33811b40010da78772a2a18c"} Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.485575 4955 scope.go:117] "RemoveContainer" containerID="2791ed070156d863d7468ff0af4b05b241b6873365d1545322d7b4f9d648d74e" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.521991 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.547049 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.565858 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:40:26 crc kubenswrapper[4955]: E1128 06:40:26.566625 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123962fc-cb22-41e5-92c2-fce487c07003" containerName="mariadb-database-create" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.566659 
4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="123962fc-cb22-41e5-92c2-fce487c07003" containerName="mariadb-database-create" Nov 28 06:40:26 crc kubenswrapper[4955]: E1128 06:40:26.566702 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="530951b8-e0b5-44a3-aaf5-48c74bf91dba" containerName="mariadb-account-create-update" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.566708 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="530951b8-e0b5-44a3-aaf5-48c74bf91dba" containerName="mariadb-account-create-update" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.567235 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="530951b8-e0b5-44a3-aaf5-48c74bf91dba" containerName="mariadb-account-create-update" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.567256 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="123962fc-cb22-41e5-92c2-fce487c07003" containerName="mariadb-database-create" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.570570 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.572421 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.573519 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.573767 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.700149 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.700202 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.700603 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.700840 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.700998 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd2td\" (UniqueName: \"kubernetes.io/projected/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-kube-api-access-cd2td\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.701031 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-logs\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.701072 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.701095 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.802295 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd2td\" 
(UniqueName: \"kubernetes.io/projected/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-kube-api-access-cd2td\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.802354 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-logs\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.802415 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.802442 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.804963 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-logs\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.805239 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.805275 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.805326 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.805391 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.807405 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.807835 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.815304 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.815346 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.816021 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.834453 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.847954 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd2td\" (UniqueName: \"kubernetes.io/projected/02ab7a37-574b-4e32-bc8a-c5dd638a6a45-kube-api-access-cd2td\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: 
I1128 06:40:26.873195 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"02ab7a37-574b-4e32-bc8a-c5dd638a6a45\") " pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.906428 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 06:40:26 crc kubenswrapper[4955]: I1128 06:40:26.977163 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.115850 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph9b5\" (UniqueName: \"kubernetes.io/projected/071d5666-d995-439f-b56a-c4feb0b11ce1-kube-api-access-ph9b5\") pod \"071d5666-d995-439f-b56a-c4feb0b11ce1\" (UID: \"071d5666-d995-439f-b56a-c4feb0b11ce1\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.116201 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/071d5666-d995-439f-b56a-c4feb0b11ce1-operator-scripts\") pod \"071d5666-d995-439f-b56a-c4feb0b11ce1\" (UID: \"071d5666-d995-439f-b56a-c4feb0b11ce1\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.117593 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/071d5666-d995-439f-b56a-c4feb0b11ce1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "071d5666-d995-439f-b56a-c4feb0b11ce1" (UID: "071d5666-d995-439f-b56a-c4feb0b11ce1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.131775 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071d5666-d995-439f-b56a-c4feb0b11ce1-kube-api-access-ph9b5" (OuterVolumeSpecName: "kube-api-access-ph9b5") pod "071d5666-d995-439f-b56a-c4feb0b11ce1" (UID: "071d5666-d995-439f-b56a-c4feb0b11ce1"). InnerVolumeSpecName "kube-api-access-ph9b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.218609 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph9b5\" (UniqueName: \"kubernetes.io/projected/071d5666-d995-439f-b56a-c4feb0b11ce1-kube-api-access-ph9b5\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.218637 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/071d5666-d995-439f-b56a-c4feb0b11ce1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.225792 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.243369 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.248901 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.422364 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt8lt\" (UniqueName: \"kubernetes.io/projected/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-kube-api-access-vt8lt\") pod \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\" (UID: \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.422703 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-operator-scripts\") pod \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\" (UID: \"a3b5c523-cb74-4ad4-b14b-0eefd62138b1\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.422763 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-operator-scripts\") pod \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\" (UID: \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.422801 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fp7k\" (UniqueName: \"kubernetes.io/projected/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-kube-api-access-5fp7k\") pod \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\" (UID: \"c059ac08-bd13-4b84-a39f-f1a9e2260b5e\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.422819 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7svxt\" (UniqueName: \"kubernetes.io/projected/df327a4d-740c-44af-aeab-e196f406408d-kube-api-access-7svxt\") pod \"df327a4d-740c-44af-aeab-e196f406408d\" (UID: \"df327a4d-740c-44af-aeab-e196f406408d\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.422857 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df327a4d-740c-44af-aeab-e196f406408d-operator-scripts\") pod \"df327a4d-740c-44af-aeab-e196f406408d\" (UID: \"df327a4d-740c-44af-aeab-e196f406408d\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.423384 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df327a4d-740c-44af-aeab-e196f406408d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df327a4d-740c-44af-aeab-e196f406408d" (UID: "df327a4d-740c-44af-aeab-e196f406408d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.423477 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3b5c523-cb74-4ad4-b14b-0eefd62138b1" (UID: "a3b5c523-cb74-4ad4-b14b-0eefd62138b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.423725 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c059ac08-bd13-4b84-a39f-f1a9e2260b5e" (UID: "c059ac08-bd13-4b84-a39f-f1a9e2260b5e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.427612 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df327a4d-740c-44af-aeab-e196f406408d-kube-api-access-7svxt" (OuterVolumeSpecName: "kube-api-access-7svxt") pod "df327a4d-740c-44af-aeab-e196f406408d" (UID: "df327a4d-740c-44af-aeab-e196f406408d"). 
InnerVolumeSpecName "kube-api-access-7svxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.428009 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-kube-api-access-vt8lt" (OuterVolumeSpecName: "kube-api-access-vt8lt") pod "a3b5c523-cb74-4ad4-b14b-0eefd62138b1" (UID: "a3b5c523-cb74-4ad4-b14b-0eefd62138b1"). InnerVolumeSpecName "kube-api-access-vt8lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.432925 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-kube-api-access-5fp7k" (OuterVolumeSpecName: "kube-api-access-5fp7k") pod "c059ac08-bd13-4b84-a39f-f1a9e2260b5e" (UID: "c059ac08-bd13-4b84-a39f-f1a9e2260b5e"). InnerVolumeSpecName "kube-api-access-5fp7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.469776 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b2j6d" event={"ID":"df327a4d-740c-44af-aeab-e196f406408d","Type":"ContainerDied","Data":"526aa47dee46caf9ee0efe06b392a45da00f59f86ad52873f28fdc687cb3f9fa"} Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.469811 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="526aa47dee46caf9ee0efe06b392a45da00f59f86ad52873f28fdc687cb3f9fa" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.469861 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-b2j6d" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.479214 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fc74-account-create-update-zxq7m" event={"ID":"071d5666-d995-439f-b56a-c4feb0b11ce1","Type":"ContainerDied","Data":"ae7024d06f8a54c2a6f1914f4278a0ae7e8dfca3d9b81f23ba7eebf713351994"} Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.479274 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae7024d06f8a54c2a6f1914f4278a0ae7e8dfca3d9b81f23ba7eebf713351994" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.479357 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fc74-account-create-update-zxq7m" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.513548 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" event={"ID":"c059ac08-bd13-4b84-a39f-f1a9e2260b5e","Type":"ContainerDied","Data":"36a6928a75b796286d677dbb1e9a7b69e67094126cda3f7a45c850afb5ea0a37"} Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.513792 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a6928a75b796286d677dbb1e9a7b69e67094126cda3f7a45c850afb5ea0a37" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.513867 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-38fa-account-create-update-l7s8x" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.518184 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerStarted","Data":"6cb26e5732e241ceea14133576a925c41cf3c49fe7b2ce122ff771367a1d8d5b"} Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.519580 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j944d" event={"ID":"a3b5c523-cb74-4ad4-b14b-0eefd62138b1","Type":"ContainerDied","Data":"96fe75c176d4c166d5e7b5e6be133726ee3ecffc0e940ae4fc2eea44180e63c1"} Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.519610 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96fe75c176d4c166d5e7b5e6be133726ee3ecffc0e940ae4fc2eea44180e63c1" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.519720 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-j944d" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.530175 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fp7k\" (UniqueName: \"kubernetes.io/projected/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-kube-api-access-5fp7k\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.530194 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7svxt\" (UniqueName: \"kubernetes.io/projected/df327a4d-740c-44af-aeab-e196f406408d-kube-api-access-7svxt\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.530204 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df327a4d-740c-44af-aeab-e196f406408d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.530215 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt8lt\" (UniqueName: \"kubernetes.io/projected/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-kube-api-access-vt8lt\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.530224 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b5c523-cb74-4ad4-b14b-0eefd62138b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.530234 4955 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c059ac08-bd13-4b84-a39f-f1a9e2260b5e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.538427 4955 generic.go:334] "Generic (PLEG): container finished" podID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerID="d6f084fb3be406b3cfb2874cf1b7986614b2594cb7847501b9fcf7eec3d9a640" exitCode=0 Nov 28 
06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.538466 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aed34078-a41e-4dda-bb13-b8dd5379ba91","Type":"ContainerDied","Data":"d6f084fb3be406b3cfb2874cf1b7986614b2594cb7847501b9fcf7eec3d9a640"} Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.563999 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.587138 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.631320 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-internal-tls-certs\") pod \"aed34078-a41e-4dda-bb13-b8dd5379ba91\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.631379 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-combined-ca-bundle\") pod \"aed34078-a41e-4dda-bb13-b8dd5379ba91\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.631451 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"aed34078-a41e-4dda-bb13-b8dd5379ba91\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.631475 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-config-data\") pod \"aed34078-a41e-4dda-bb13-b8dd5379ba91\" (UID: 
\"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.631497 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-scripts\") pod \"aed34078-a41e-4dda-bb13-b8dd5379ba91\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.631552 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mpkq\" (UniqueName: \"kubernetes.io/projected/aed34078-a41e-4dda-bb13-b8dd5379ba91-kube-api-access-6mpkq\") pod \"aed34078-a41e-4dda-bb13-b8dd5379ba91\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.631575 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-logs\") pod \"aed34078-a41e-4dda-bb13-b8dd5379ba91\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.631628 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-httpd-run\") pod \"aed34078-a41e-4dda-bb13-b8dd5379ba91\" (UID: \"aed34078-a41e-4dda-bb13-b8dd5379ba91\") " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.633830 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "aed34078-a41e-4dda-bb13-b8dd5379ba91" (UID: "aed34078-a41e-4dda-bb13-b8dd5379ba91"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.634243 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-logs" (OuterVolumeSpecName: "logs") pod "aed34078-a41e-4dda-bb13-b8dd5379ba91" (UID: "aed34078-a41e-4dda-bb13-b8dd5379ba91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.637745 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "aed34078-a41e-4dda-bb13-b8dd5379ba91" (UID: "aed34078-a41e-4dda-bb13-b8dd5379ba91"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.642879 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-scripts" (OuterVolumeSpecName: "scripts") pod "aed34078-a41e-4dda-bb13-b8dd5379ba91" (UID: "aed34078-a41e-4dda-bb13-b8dd5379ba91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.643170 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aed34078-a41e-4dda-bb13-b8dd5379ba91-kube-api-access-6mpkq" (OuterVolumeSpecName: "kube-api-access-6mpkq") pod "aed34078-a41e-4dda-bb13-b8dd5379ba91" (UID: "aed34078-a41e-4dda-bb13-b8dd5379ba91"). InnerVolumeSpecName "kube-api-access-6mpkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.673654 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aed34078-a41e-4dda-bb13-b8dd5379ba91" (UID: "aed34078-a41e-4dda-bb13-b8dd5379ba91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.719745 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-config-data" (OuterVolumeSpecName: "config-data") pod "aed34078-a41e-4dda-bb13-b8dd5379ba91" (UID: "aed34078-a41e-4dda-bb13-b8dd5379ba91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.724318 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aed34078-a41e-4dda-bb13-b8dd5379ba91" (UID: "aed34078-a41e-4dda-bb13-b8dd5379ba91"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.728009 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf672dd-567f-4cca-b5c8-7617bb3a02c1" path="/var/lib/kubelet/pods/ebf672dd-567f-4cca-b5c8-7617bb3a02c1/volumes" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.733773 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.733935 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.734035 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.734097 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mpkq\" (UniqueName: \"kubernetes.io/projected/aed34078-a41e-4dda-bb13-b8dd5379ba91-kube-api-access-6mpkq\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.734155 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.734227 4955 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/aed34078-a41e-4dda-bb13-b8dd5379ba91-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.734284 4955 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.734338 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aed34078-a41e-4dda-bb13-b8dd5379ba91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.768647 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 06:40:27 crc kubenswrapper[4955]: I1128 06:40:27.835472 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.128727 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.551011 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerStarted","Data":"82edc3dcd953b69559ffce408a3bab5038f5deae90067f00de216d8e14ea35b1"} Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.556369 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.556378 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"aed34078-a41e-4dda-bb13-b8dd5379ba91","Type":"ContainerDied","Data":"5537610aa815d2e73a9e59b4758e0ca9bf904f91db9bfd1be6cebbe76d6bcc0f"} Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.556438 4955 scope.go:117] "RemoveContainer" containerID="d6f084fb3be406b3cfb2874cf1b7986614b2594cb7847501b9fcf7eec3d9a640" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.560283 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02ab7a37-574b-4e32-bc8a-c5dd638a6a45","Type":"ContainerStarted","Data":"8887199cc600705630a71eb18b33e083f54e064b9a0431c187c6be4b097d7a37"} Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.560346 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02ab7a37-574b-4e32-bc8a-c5dd638a6a45","Type":"ContainerStarted","Data":"ae2a452f81bbffe67bec1b506ce2f17323a1047c8b47b63716b411921b8d4535"} Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.581864 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.600764 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.611634 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:40:28 crc kubenswrapper[4955]: E1128 06:40:28.611983 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c059ac08-bd13-4b84-a39f-f1a9e2260b5e" containerName="mariadb-account-create-update" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.611998 4955 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c059ac08-bd13-4b84-a39f-f1a9e2260b5e" containerName="mariadb-account-create-update" Nov 28 06:40:28 crc kubenswrapper[4955]: E1128 06:40:28.612014 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerName="glance-httpd" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612021 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerName="glance-httpd" Nov 28 06:40:28 crc kubenswrapper[4955]: E1128 06:40:28.612031 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071d5666-d995-439f-b56a-c4feb0b11ce1" containerName="mariadb-account-create-update" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612037 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="071d5666-d995-439f-b56a-c4feb0b11ce1" containerName="mariadb-account-create-update" Nov 28 06:40:28 crc kubenswrapper[4955]: E1128 06:40:28.612052 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerName="glance-log" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612057 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerName="glance-log" Nov 28 06:40:28 crc kubenswrapper[4955]: E1128 06:40:28.612072 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df327a4d-740c-44af-aeab-e196f406408d" containerName="mariadb-database-create" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612079 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="df327a4d-740c-44af-aeab-e196f406408d" containerName="mariadb-database-create" Nov 28 06:40:28 crc kubenswrapper[4955]: E1128 06:40:28.612090 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b5c523-cb74-4ad4-b14b-0eefd62138b1" containerName="mariadb-database-create" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 
06:40:28.612095 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b5c523-cb74-4ad4-b14b-0eefd62138b1" containerName="mariadb-database-create" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612258 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="df327a4d-740c-44af-aeab-e196f406408d" containerName="mariadb-database-create" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612274 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c059ac08-bd13-4b84-a39f-f1a9e2260b5e" containerName="mariadb-account-create-update" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612286 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerName="glance-httpd" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612295 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" containerName="glance-log" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612305 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b5c523-cb74-4ad4-b14b-0eefd62138b1" containerName="mariadb-database-create" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.612310 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="071d5666-d995-439f-b56a-c4feb0b11ce1" containerName="mariadb-account-create-update" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.613204 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.615654 4955 scope.go:117] "RemoveContainer" containerID="c16dad1f89126d44dfe03e59d9908d81ed918475b03bdd78dcfc01052b9f88e4" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.619800 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.619950 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.647055 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.756742 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/104ece36-bc05-45c5-984c-55d61b6ebe8b-logs\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.756817 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/104ece36-bc05-45c5-984c-55d61b6ebe8b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.756853 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vkdx\" (UniqueName: \"kubernetes.io/projected/104ece36-bc05-45c5-984c-55d61b6ebe8b-kube-api-access-7vkdx\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 
06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.756981 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.757003 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.757034 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.757057 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.757110 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.859051 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.859328 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.859481 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.859765 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.859962 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc 
kubenswrapper[4955]: I1128 06:40:28.860135 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/104ece36-bc05-45c5-984c-55d61b6ebe8b-logs\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.860335 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/104ece36-bc05-45c5-984c-55d61b6ebe8b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.860529 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/104ece36-bc05-45c5-984c-55d61b6ebe8b-logs\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.860540 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vkdx\" (UniqueName: \"kubernetes.io/projected/104ece36-bc05-45c5-984c-55d61b6ebe8b-kube-api-access-7vkdx\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.859835 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.861306 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/104ece36-bc05-45c5-984c-55d61b6ebe8b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.864163 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.865661 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.865756 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.873332 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/104ece36-bc05-45c5-984c-55d61b6ebe8b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.878258 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vkdx\" (UniqueName: 
\"kubernetes.io/projected/104ece36-bc05-45c5-984c-55d61b6ebe8b-kube-api-access-7vkdx\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.895780 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"104ece36-bc05-45c5-984c-55d61b6ebe8b\") " pod="openstack/glance-default-internal-api-0" Nov 28 06:40:28 crc kubenswrapper[4955]: I1128 06:40:28.951122 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.446609 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.494038 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 06:40:29 crc kubenswrapper[4955]: W1128 06:40:29.501595 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod104ece36_bc05_45c5_984c_55d61b6ebe8b.slice/crio-3317af77658d2c15692a469e5dd8d4c60e45c9430a0cb9bc098def036f122cf1 WatchSource:0}: Error finding container 3317af77658d2c15692a469e5dd8d4c60e45c9430a0cb9bc098def036f122cf1: Status 404 returned error can't find the container with id 3317af77658d2c15692a469e5dd8d4c60e45c9430a0cb9bc098def036f122cf1 Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.577432 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-454p8\" (UniqueName: \"kubernetes.io/projected/f3a8eb88-043f-44ca-8b8c-68288a2045d9-kube-api-access-454p8\") pod \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\" (UID: 
\"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.577679 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-combined-ca-bundle\") pod \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.577713 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-config-data\") pod \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.577756 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a8eb88-043f-44ca-8b8c-68288a2045d9-logs\") pod \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.577822 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-secret-key\") pod \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.577877 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-tls-certs\") pod \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.578115 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-scripts\") pod \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\" (UID: \"f3a8eb88-043f-44ca-8b8c-68288a2045d9\") " Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.578692 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a8eb88-043f-44ca-8b8c-68288a2045d9-logs" (OuterVolumeSpecName: "logs") pod "f3a8eb88-043f-44ca-8b8c-68288a2045d9" (UID: "f3a8eb88-043f-44ca-8b8c-68288a2045d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.583481 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"104ece36-bc05-45c5-984c-55d61b6ebe8b","Type":"ContainerStarted","Data":"3317af77658d2c15692a469e5dd8d4c60e45c9430a0cb9bc098def036f122cf1"} Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.583802 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f3a8eb88-043f-44ca-8b8c-68288a2045d9" (UID: "f3a8eb88-043f-44ca-8b8c-68288a2045d9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.583978 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a8eb88-043f-44ca-8b8c-68288a2045d9-kube-api-access-454p8" (OuterVolumeSpecName: "kube-api-access-454p8") pod "f3a8eb88-043f-44ca-8b8c-68288a2045d9" (UID: "f3a8eb88-043f-44ca-8b8c-68288a2045d9"). InnerVolumeSpecName "kube-api-access-454p8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.588660 4955 generic.go:334] "Generic (PLEG): container finished" podID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerID="04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26" exitCode=137 Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.588712 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9c465b4d8-cslvv" event={"ID":"f3a8eb88-043f-44ca-8b8c-68288a2045d9","Type":"ContainerDied","Data":"04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26"} Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.588737 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-9c465b4d8-cslvv" event={"ID":"f3a8eb88-043f-44ca-8b8c-68288a2045d9","Type":"ContainerDied","Data":"0f649e74350aab47b349880065239a1b6b55e80aed6a5dcf364a2c85f1a3d45e"} Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.588753 4955 scope.go:117] "RemoveContainer" containerID="0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.588844 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-9c465b4d8-cslvv" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.604137 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-scripts" (OuterVolumeSpecName: "scripts") pod "f3a8eb88-043f-44ca-8b8c-68288a2045d9" (UID: "f3a8eb88-043f-44ca-8b8c-68288a2045d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.606242 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerStarted","Data":"bf7391c80c590d0572624bb5e8cee3d4f06a8c8f8b18da81d00d0f9a3309ed4e"} Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.607089 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-config-data" (OuterVolumeSpecName: "config-data") pod "f3a8eb88-043f-44ca-8b8c-68288a2045d9" (UID: "f3a8eb88-043f-44ca-8b8c-68288a2045d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.612422 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"02ab7a37-574b-4e32-bc8a-c5dd638a6a45","Type":"ContainerStarted","Data":"befbf2c95380c3602079c779f4cc1d68477995c624b3e032c439acd195474cea"} Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.613242 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3a8eb88-043f-44ca-8b8c-68288a2045d9" (UID: "f3a8eb88-043f-44ca-8b8c-68288a2045d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.633876 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "f3a8eb88-043f-44ca-8b8c-68288a2045d9" (UID: "f3a8eb88-043f-44ca-8b8c-68288a2045d9"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.638187 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.6381695990000003 podStartE2EDuration="3.638169599s" podCreationTimestamp="2025-11-28 06:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:40:29.633371578 +0000 UTC m=+1152.222627158" watchObservedRunningTime="2025-11-28 06:40:29.638169599 +0000 UTC m=+1152.227425169" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.680516 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.680553 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-454p8\" (UniqueName: \"kubernetes.io/projected/f3a8eb88-043f-44ca-8b8c-68288a2045d9-kube-api-access-454p8\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.680564 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.680573 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f3a8eb88-043f-44ca-8b8c-68288a2045d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.680582 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a8eb88-043f-44ca-8b8c-68288a2045d9-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 
06:40:29.680590 4955 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.680598 4955 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3a8eb88-043f-44ca-8b8c-68288a2045d9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.714427 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aed34078-a41e-4dda-bb13-b8dd5379ba91" path="/var/lib/kubelet/pods/aed34078-a41e-4dda-bb13-b8dd5379ba91/volumes" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.774162 4955 scope.go:117] "RemoveContainer" containerID="04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.794630 4955 scope.go:117] "RemoveContainer" containerID="0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2" Nov 28 06:40:29 crc kubenswrapper[4955]: E1128 06:40:29.795079 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2\": container with ID starting with 0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2 not found: ID does not exist" containerID="0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.795161 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2"} err="failed to get container status \"0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2\": rpc error: code = NotFound desc = could not find container 
\"0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2\": container with ID starting with 0776a74cfb5fcde3a0435511128528f5ec05b6bdf74512be1447127b0deb7cf2 not found: ID does not exist" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.795195 4955 scope.go:117] "RemoveContainer" containerID="04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26" Nov 28 06:40:29 crc kubenswrapper[4955]: E1128 06:40:29.795778 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26\": container with ID starting with 04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26 not found: ID does not exist" containerID="04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.795812 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26"} err="failed to get container status \"04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26\": rpc error: code = NotFound desc = could not find container \"04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26\": container with ID starting with 04a17982aba97b47dc0b52e2f2ade4860977dc91a96a302f4be55a1e59a9cc26 not found: ID does not exist" Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.967673 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-9c465b4d8-cslvv"] Nov 28 06:40:29 crc kubenswrapper[4955]: I1128 06:40:29.978673 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-9c465b4d8-cslvv"] Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.638941 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"104ece36-bc05-45c5-984c-55d61b6ebe8b","Type":"ContainerStarted","Data":"c118d84064f6f302fe33d1780d3b53fec67b949832b754a2c0c120158ce7f889"} Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.677882 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t2nkh"] Nov 28 06:40:30 crc kubenswrapper[4955]: E1128 06:40:30.678286 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.678303 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon" Nov 28 06:40:30 crc kubenswrapper[4955]: E1128 06:40:30.678317 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon-log" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.678324 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon-log" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.678496 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.678524 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" containerName="horizon-log" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.679168 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.681794 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.681974 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.682140 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4qf5f" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.693189 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t2nkh"] Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.802373 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl9jq\" (UniqueName: \"kubernetes.io/projected/92884a73-5a0a-4a22-9919-03c0c4c6829d-kube-api-access-sl9jq\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.802471 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-scripts\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.802589 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-config-data\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " 
pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.802616 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.903761 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl9jq\" (UniqueName: \"kubernetes.io/projected/92884a73-5a0a-4a22-9919-03c0c4c6829d-kube-api-access-sl9jq\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.904195 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-scripts\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.904243 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-config-data\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.904281 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: 
\"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.910733 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.912023 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-scripts\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.920837 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-config-data\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:30 crc kubenswrapper[4955]: I1128 06:40:30.923159 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl9jq\" (UniqueName: \"kubernetes.io/projected/92884a73-5a0a-4a22-9919-03c0c4c6829d-kube-api-access-sl9jq\") pod \"nova-cell0-conductor-db-sync-t2nkh\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.006277 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.453845 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t2nkh"] Nov 28 06:40:31 crc kubenswrapper[4955]: W1128 06:40:31.459718 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92884a73_5a0a_4a22_9919_03c0c4c6829d.slice/crio-ba31a3149a28875eeb5a2f91d1ff8a854f882e7a5ca900634224d83040ec18c2 WatchSource:0}: Error finding container ba31a3149a28875eeb5a2f91d1ff8a854f882e7a5ca900634224d83040ec18c2: Status 404 returned error can't find the container with id ba31a3149a28875eeb5a2f91d1ff8a854f882e7a5ca900634224d83040ec18c2 Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.655793 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"104ece36-bc05-45c5-984c-55d61b6ebe8b","Type":"ContainerStarted","Data":"89c32490e7c4133d97fb89f80ae3f55bf45966c08a2cb407001bd44a0e7942d6"} Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.658613 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerStarted","Data":"2cd16b4a24cda0e2c285f98a7cd0f10b58a3cc4cfcde2078f5d786e672d30ee5"} Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.658730 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="ceilometer-central-agent" containerID="cri-o://6cb26e5732e241ceea14133576a925c41cf3c49fe7b2ce122ff771367a1d8d5b" gracePeriod=30 Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.658760 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="proxy-httpd" 
containerID="cri-o://2cd16b4a24cda0e2c285f98a7cd0f10b58a3cc4cfcde2078f5d786e672d30ee5" gracePeriod=30 Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.658737 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="sg-core" containerID="cri-o://bf7391c80c590d0572624bb5e8cee3d4f06a8c8f8b18da81d00d0f9a3309ed4e" gracePeriod=30 Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.658750 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.658775 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="ceilometer-notification-agent" containerID="cri-o://82edc3dcd953b69559ffce408a3bab5038f5deae90067f00de216d8e14ea35b1" gracePeriod=30 Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.660094 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" event={"ID":"92884a73-5a0a-4a22-9919-03c0c4c6829d","Type":"ContainerStarted","Data":"ba31a3149a28875eeb5a2f91d1ff8a854f882e7a5ca900634224d83040ec18c2"} Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.706161 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.706144576 podStartE2EDuration="3.706144576s" podCreationTimestamp="2025-11-28 06:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:40:31.687585981 +0000 UTC m=+1154.276841541" watchObservedRunningTime="2025-11-28 06:40:31.706144576 +0000 UTC m=+1154.295400146" Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.709304 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.5403310770000003 podStartE2EDuration="6.709296622s" podCreationTimestamp="2025-11-28 06:40:25 +0000 UTC" firstStartedPulling="2025-11-28 06:40:26.375713788 +0000 UTC m=+1148.964969358" lastFinishedPulling="2025-11-28 06:40:30.544679333 +0000 UTC m=+1153.133934903" observedRunningTime="2025-11-28 06:40:31.703246608 +0000 UTC m=+1154.292502198" watchObservedRunningTime="2025-11-28 06:40:31.709296622 +0000 UTC m=+1154.298552192" Nov 28 06:40:31 crc kubenswrapper[4955]: I1128 06:40:31.714809 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a8eb88-043f-44ca-8b8c-68288a2045d9" path="/var/lib/kubelet/pods/f3a8eb88-043f-44ca-8b8c-68288a2045d9/volumes" Nov 28 06:40:32 crc kubenswrapper[4955]: I1128 06:40:32.679991 4955 generic.go:334] "Generic (PLEG): container finished" podID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerID="2cd16b4a24cda0e2c285f98a7cd0f10b58a3cc4cfcde2078f5d786e672d30ee5" exitCode=0 Nov 28 06:40:32 crc kubenswrapper[4955]: I1128 06:40:32.680292 4955 generic.go:334] "Generic (PLEG): container finished" podID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerID="bf7391c80c590d0572624bb5e8cee3d4f06a8c8f8b18da81d00d0f9a3309ed4e" exitCode=2 Nov 28 06:40:32 crc kubenswrapper[4955]: I1128 06:40:32.680301 4955 generic.go:334] "Generic (PLEG): container finished" podID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerID="82edc3dcd953b69559ffce408a3bab5038f5deae90067f00de216d8e14ea35b1" exitCode=0 Nov 28 06:40:32 crc kubenswrapper[4955]: I1128 06:40:32.680025 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerDied","Data":"2cd16b4a24cda0e2c285f98a7cd0f10b58a3cc4cfcde2078f5d786e672d30ee5"} Nov 28 06:40:32 crc kubenswrapper[4955]: I1128 06:40:32.680367 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerDied","Data":"bf7391c80c590d0572624bb5e8cee3d4f06a8c8f8b18da81d00d0f9a3309ed4e"} Nov 28 06:40:32 crc kubenswrapper[4955]: I1128 06:40:32.680409 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerDied","Data":"82edc3dcd953b69559ffce408a3bab5038f5deae90067f00de216d8e14ea35b1"} Nov 28 06:40:35 crc kubenswrapper[4955]: I1128 06:40:35.707263 4955 generic.go:334] "Generic (PLEG): container finished" podID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerID="6cb26e5732e241ceea14133576a925c41cf3c49fe7b2ce122ff771367a1d8d5b" exitCode=0 Nov 28 06:40:35 crc kubenswrapper[4955]: I1128 06:40:35.716803 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerDied","Data":"6cb26e5732e241ceea14133576a925c41cf3c49fe7b2ce122ff771367a1d8d5b"} Nov 28 06:40:36 crc kubenswrapper[4955]: I1128 06:40:36.907270 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 06:40:36 crc kubenswrapper[4955]: I1128 06:40:36.907624 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 06:40:36 crc kubenswrapper[4955]: I1128 06:40:36.951605 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 06:40:36 crc kubenswrapper[4955]: I1128 06:40:36.964219 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 06:40:37 crc kubenswrapper[4955]: I1128 06:40:37.736284 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 06:40:37 crc kubenswrapper[4955]: I1128 06:40:37.736637 4955 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 06:40:38 crc kubenswrapper[4955]: I1128 06:40:38.952137 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:38 crc kubenswrapper[4955]: I1128 06:40:38.952479 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:38 crc kubenswrapper[4955]: I1128 06:40:38.998615 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.008766 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.140269 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.264853 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtlh9\" (UniqueName: \"kubernetes.io/projected/553a2f31-da45-4d1e-86dc-c94585a5a1a5-kube-api-access-qtlh9\") pod \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.265207 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-combined-ca-bundle\") pod \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.265271 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-log-httpd\") pod 
\"553a2f31-da45-4d1e-86dc-c94585a5a1a5\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.265304 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-config-data\") pod \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.265354 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-sg-core-conf-yaml\") pod \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.265376 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-scripts\") pod \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.265442 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-run-httpd\") pod \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\" (UID: \"553a2f31-da45-4d1e-86dc-c94585a5a1a5\") " Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.265999 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "553a2f31-da45-4d1e-86dc-c94585a5a1a5" (UID: "553a2f31-da45-4d1e-86dc-c94585a5a1a5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.274859 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "553a2f31-da45-4d1e-86dc-c94585a5a1a5" (UID: "553a2f31-da45-4d1e-86dc-c94585a5a1a5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.274933 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553a2f31-da45-4d1e-86dc-c94585a5a1a5-kube-api-access-qtlh9" (OuterVolumeSpecName: "kube-api-access-qtlh9") pod "553a2f31-da45-4d1e-86dc-c94585a5a1a5" (UID: "553a2f31-da45-4d1e-86dc-c94585a5a1a5"). InnerVolumeSpecName "kube-api-access-qtlh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.290718 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-scripts" (OuterVolumeSpecName: "scripts") pod "553a2f31-da45-4d1e-86dc-c94585a5a1a5" (UID: "553a2f31-da45-4d1e-86dc-c94585a5a1a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.338890 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "553a2f31-da45-4d1e-86dc-c94585a5a1a5" (UID: "553a2f31-da45-4d1e-86dc-c94585a5a1a5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.368819 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtlh9\" (UniqueName: \"kubernetes.io/projected/553a2f31-da45-4d1e-86dc-c94585a5a1a5-kube-api-access-qtlh9\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.368850 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.368860 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.368885 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.368894 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/553a2f31-da45-4d1e-86dc-c94585a5a1a5-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.402012 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "553a2f31-da45-4d1e-86dc-c94585a5a1a5" (UID: "553a2f31-da45-4d1e-86dc-c94585a5a1a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.472532 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.522525 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-config-data" (OuterVolumeSpecName: "config-data") pod "553a2f31-da45-4d1e-86dc-c94585a5a1a5" (UID: "553a2f31-da45-4d1e-86dc-c94585a5a1a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.574476 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/553a2f31-da45-4d1e-86dc-c94585a5a1a5-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.766747 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"553a2f31-da45-4d1e-86dc-c94585a5a1a5","Type":"ContainerDied","Data":"381b00445c40a52136df9e2487940363f61ae7ce33811b40010da78772a2a18c"} Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.766795 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.766804 4955 scope.go:117] "RemoveContainer" containerID="2cd16b4a24cda0e2c285f98a7cd0f10b58a3cc4cfcde2078f5d786e672d30ee5" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.768477 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" event={"ID":"92884a73-5a0a-4a22-9919-03c0c4c6829d","Type":"ContainerStarted","Data":"7ac7e9d6ded59c025760d3b72852c742c1ae6dc0820eb1b020a6ac5545342105"} Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.768701 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.768738 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.788486 4955 scope.go:117] "RemoveContainer" containerID="bf7391c80c590d0572624bb5e8cee3d4f06a8c8f8b18da81d00d0f9a3309ed4e" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.792341 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.798929 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.804163 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" podStartSLOduration=2.253369922 podStartE2EDuration="9.804146116s" podCreationTimestamp="2025-11-28 06:40:30 +0000 UTC" firstStartedPulling="2025-11-28 06:40:31.46233975 +0000 UTC m=+1154.051595310" lastFinishedPulling="2025-11-28 06:40:39.013115934 +0000 UTC m=+1161.602371504" observedRunningTime="2025-11-28 06:40:39.8013543 +0000 UTC m=+1162.390609870" watchObservedRunningTime="2025-11-28 06:40:39.804146116 +0000 UTC 
m=+1162.393401696" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.813519 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.813637 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.814177 4955 scope.go:117] "RemoveContainer" containerID="82edc3dcd953b69559ffce408a3bab5038f5deae90067f00de216d8e14ea35b1" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.837831 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:39 crc kubenswrapper[4955]: E1128 06:40:39.838199 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="proxy-httpd" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.838216 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="proxy-httpd" Nov 28 06:40:39 crc kubenswrapper[4955]: E1128 06:40:39.838229 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="ceilometer-notification-agent" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.838236 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="ceilometer-notification-agent" Nov 28 06:40:39 crc kubenswrapper[4955]: E1128 06:40:39.838255 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="ceilometer-central-agent" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.838261 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="ceilometer-central-agent" Nov 28 06:40:39 crc kubenswrapper[4955]: E1128 06:40:39.838283 4955 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="sg-core" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.838289 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="sg-core" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.838456 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="ceilometer-notification-agent" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.838465 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="ceilometer-central-agent" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.838480 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="proxy-httpd" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.838515 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" containerName="sg-core" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.840930 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.849452 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.867332 4955 scope.go:117] "RemoveContainer" containerID="6cb26e5732e241ceea14133576a925c41cf3c49fe7b2ce122ff771367a1d8d5b" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.868128 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.868305 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 06:40:39 crc kubenswrapper[4955]: I1128 06:40:39.944687 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.086072 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.086117 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-scripts\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.086202 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cnlw\" (UniqueName: \"kubernetes.io/projected/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-kube-api-access-8cnlw\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " 
pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.086221 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-config-data\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.086252 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-log-httpd\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.086269 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-run-httpd\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.086307 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.187746 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cnlw\" (UniqueName: \"kubernetes.io/projected/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-kube-api-access-8cnlw\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.187787 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-config-data\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.187822 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-log-httpd\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.187842 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-run-httpd\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.187881 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.187904 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.187920 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-scripts\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " 
pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.188466 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-run-httpd\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.188588 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-log-httpd\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.193024 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.196193 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.197008 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-config-data\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.203169 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cnlw\" (UniqueName: 
\"kubernetes.io/projected/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-kube-api-access-8cnlw\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.203383 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-scripts\") pod \"ceilometer-0\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.233539 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.682614 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:40 crc kubenswrapper[4955]: I1128 06:40:40.778472 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerStarted","Data":"f405757a6e939b7f347eec0447f88c19d31a2a8c1d0d20252f1206bd714f1d66"} Nov 28 06:40:41 crc kubenswrapper[4955]: I1128 06:40:41.208572 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:41 crc kubenswrapper[4955]: I1128 06:40:41.746176 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553a2f31-da45-4d1e-86dc-c94585a5a1a5" path="/var/lib/kubelet/pods/553a2f31-da45-4d1e-86dc-c94585a5a1a5/volumes" Nov 28 06:40:41 crc kubenswrapper[4955]: I1128 06:40:41.804527 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerStarted","Data":"b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600"} Nov 28 06:40:41 crc kubenswrapper[4955]: I1128 06:40:41.990625 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Nov 28 06:40:41 crc kubenswrapper[4955]: I1128 06:40:41.990736 4955 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 06:40:42 crc kubenswrapper[4955]: I1128 06:40:42.219380 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 06:40:42 crc kubenswrapper[4955]: I1128 06:40:42.816827 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerStarted","Data":"322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d"} Nov 28 06:40:43 crc kubenswrapper[4955]: I1128 06:40:43.825606 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerStarted","Data":"6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91"} Nov 28 06:40:47 crc kubenswrapper[4955]: I1128 06:40:47.871891 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerStarted","Data":"8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d"} Nov 28 06:40:47 crc kubenswrapper[4955]: I1128 06:40:47.872622 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 06:40:47 crc kubenswrapper[4955]: I1128 06:40:47.872208 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="ceilometer-central-agent" containerID="cri-o://b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600" gracePeriod=30 Nov 28 06:40:47 crc kubenswrapper[4955]: I1128 06:40:47.872648 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" 
containerName="proxy-httpd" containerID="cri-o://8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d" gracePeriod=30 Nov 28 06:40:47 crc kubenswrapper[4955]: I1128 06:40:47.872763 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="sg-core" containerID="cri-o://6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91" gracePeriod=30 Nov 28 06:40:47 crc kubenswrapper[4955]: I1128 06:40:47.872741 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="ceilometer-notification-agent" containerID="cri-o://322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d" gracePeriod=30 Nov 28 06:40:47 crc kubenswrapper[4955]: I1128 06:40:47.897687 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.837291827 podStartE2EDuration="8.897666673s" podCreationTimestamp="2025-11-28 06:40:39 +0000 UTC" firstStartedPulling="2025-11-28 06:40:40.693565255 +0000 UTC m=+1163.282820825" lastFinishedPulling="2025-11-28 06:40:46.753940101 +0000 UTC m=+1169.343195671" observedRunningTime="2025-11-28 06:40:47.892089661 +0000 UTC m=+1170.481345241" watchObservedRunningTime="2025-11-28 06:40:47.897666673 +0000 UTC m=+1170.486922253" Nov 28 06:40:48 crc kubenswrapper[4955]: I1128 06:40:48.884115 4955 generic.go:334] "Generic (PLEG): container finished" podID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerID="8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d" exitCode=0 Nov 28 06:40:48 crc kubenswrapper[4955]: I1128 06:40:48.884456 4955 generic.go:334] "Generic (PLEG): container finished" podID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerID="6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91" exitCode=2 Nov 28 06:40:48 crc kubenswrapper[4955]: I1128 
06:40:48.884472 4955 generic.go:334] "Generic (PLEG): container finished" podID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerID="322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d" exitCode=0 Nov 28 06:40:48 crc kubenswrapper[4955]: I1128 06:40:48.884166 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerDied","Data":"8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d"} Nov 28 06:40:48 crc kubenswrapper[4955]: I1128 06:40:48.884549 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerDied","Data":"6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91"} Nov 28 06:40:48 crc kubenswrapper[4955]: I1128 06:40:48.884577 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerDied","Data":"322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d"} Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.509162 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.679928 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-combined-ca-bundle\") pod \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.680031 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-sg-core-conf-yaml\") pod \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.680126 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-log-httpd\") pod \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.680223 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-config-data\") pod \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.680286 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-run-httpd\") pod \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.680746 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b5ffc8e4-468d-4e80-9ab0-581842caf6b2" (UID: "b5ffc8e4-468d-4e80-9ab0-581842caf6b2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.680832 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b5ffc8e4-468d-4e80-9ab0-581842caf6b2" (UID: "b5ffc8e4-468d-4e80-9ab0-581842caf6b2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.680979 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cnlw\" (UniqueName: \"kubernetes.io/projected/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-kube-api-access-8cnlw\") pod \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.681537 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-scripts\") pod \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\" (UID: \"b5ffc8e4-468d-4e80-9ab0-581842caf6b2\") " Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.682191 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.682224 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.686000 
4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-kube-api-access-8cnlw" (OuterVolumeSpecName: "kube-api-access-8cnlw") pod "b5ffc8e4-468d-4e80-9ab0-581842caf6b2" (UID: "b5ffc8e4-468d-4e80-9ab0-581842caf6b2"). InnerVolumeSpecName "kube-api-access-8cnlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.694451 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-scripts" (OuterVolumeSpecName: "scripts") pod "b5ffc8e4-468d-4e80-9ab0-581842caf6b2" (UID: "b5ffc8e4-468d-4e80-9ab0-581842caf6b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.706669 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b5ffc8e4-468d-4e80-9ab0-581842caf6b2" (UID: "b5ffc8e4-468d-4e80-9ab0-581842caf6b2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.768204 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5ffc8e4-468d-4e80-9ab0-581842caf6b2" (UID: "b5ffc8e4-468d-4e80-9ab0-581842caf6b2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.784262 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.785170 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.785207 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cnlw\" (UniqueName: \"kubernetes.io/projected/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-kube-api-access-8cnlw\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.785221 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.800646 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-config-data" (OuterVolumeSpecName: "config-data") pod "b5ffc8e4-468d-4e80-9ab0-581842caf6b2" (UID: "b5ffc8e4-468d-4e80-9ab0-581842caf6b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.887846 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5ffc8e4-468d-4e80-9ab0-581842caf6b2-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.920859 4955 generic.go:334] "Generic (PLEG): container finished" podID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerID="b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600" exitCode=0 Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.920935 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.920959 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerDied","Data":"b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600"} Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.921579 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5ffc8e4-468d-4e80-9ab0-581842caf6b2","Type":"ContainerDied","Data":"f405757a6e939b7f347eec0447f88c19d31a2a8c1d0d20252f1206bd714f1d66"} Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.921660 4955 scope.go:117] "RemoveContainer" containerID="8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.941514 4955 scope.go:117] "RemoveContainer" containerID="6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.962345 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.963348 4955 scope.go:117] "RemoveContainer" 
containerID="322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.971065 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.989005 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:52 crc kubenswrapper[4955]: E1128 06:40:52.989398 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="sg-core" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.989415 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="sg-core" Nov 28 06:40:52 crc kubenswrapper[4955]: E1128 06:40:52.989428 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="ceilometer-notification-agent" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.989435 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="ceilometer-notification-agent" Nov 28 06:40:52 crc kubenswrapper[4955]: E1128 06:40:52.989447 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="proxy-httpd" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.989454 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="proxy-httpd" Nov 28 06:40:52 crc kubenswrapper[4955]: E1128 06:40:52.989484 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="ceilometer-central-agent" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.989493 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="ceilometer-central-agent" Nov 28 06:40:52 
crc kubenswrapper[4955]: I1128 06:40:52.989708 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="sg-core" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.989731 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="proxy-httpd" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.989743 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="ceilometer-notification-agent" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.989769 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" containerName="ceilometer-central-agent" Nov 28 06:40:52 crc kubenswrapper[4955]: I1128 06:40:52.991440 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.000111 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.004934 4955 scope.go:117] "RemoveContainer" containerID="b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.005384 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.005576 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.036924 4955 scope.go:117] "RemoveContainer" containerID="8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d" Nov 28 06:40:53 crc kubenswrapper[4955]: E1128 06:40:53.037423 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d\": container with ID starting with 8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d not found: ID does not exist" containerID="8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.037455 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d"} err="failed to get container status \"8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d\": rpc error: code = NotFound desc = could not find container \"8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d\": container with ID starting with 8c6f2d3a537ecf0dcf39f09774dcf6efe69c0391f616763cd3ae389267ca336d not found: ID does not exist" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.037477 4955 scope.go:117] "RemoveContainer" containerID="6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91" Nov 28 06:40:53 crc kubenswrapper[4955]: E1128 06:40:53.038009 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91\": container with ID starting with 6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91 not found: ID does not exist" containerID="6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.038050 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91"} err="failed to get container status \"6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91\": rpc error: code = NotFound desc = could not find container \"6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91\": container with ID 
starting with 6c1db063446aab69f83d27d1c85239fd54e8619084702e2f09d519434fd9fd91 not found: ID does not exist" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.038078 4955 scope.go:117] "RemoveContainer" containerID="322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d" Nov 28 06:40:53 crc kubenswrapper[4955]: E1128 06:40:53.038532 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d\": container with ID starting with 322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d not found: ID does not exist" containerID="322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.038572 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d"} err="failed to get container status \"322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d\": rpc error: code = NotFound desc = could not find container \"322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d\": container with ID starting with 322b95cc3844fcbbb2cfdb42611f4805ee2920302b302eba42a331147b03255d not found: ID does not exist" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.038602 4955 scope.go:117] "RemoveContainer" containerID="b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600" Nov 28 06:40:53 crc kubenswrapper[4955]: E1128 06:40:53.039457 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600\": container with ID starting with b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600 not found: ID does not exist" containerID="b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600" Nov 28 
06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.039484 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600"} err="failed to get container status \"b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600\": rpc error: code = NotFound desc = could not find container \"b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600\": container with ID starting with b2905d15fed396fa20e59545a342be16195bbb28ad4f202a31f1c908c15f5600 not found: ID does not exist" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.192153 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.192213 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-run-httpd\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.192259 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-log-httpd\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.192286 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnqbz\" (UniqueName: \"kubernetes.io/projected/4d266d08-5094-4cfe-8adb-720a7dafcfdd-kube-api-access-vnqbz\") pod 
\"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.192359 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.192387 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-config-data\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.193386 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-scripts\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295128 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295192 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-run-httpd\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295234 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-log-httpd\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295261 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnqbz\" (UniqueName: \"kubernetes.io/projected/4d266d08-5094-4cfe-8adb-720a7dafcfdd-kube-api-access-vnqbz\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295342 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295373 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-config-data\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295412 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-scripts\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295833 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-log-httpd\") pod \"ceilometer-0\" (UID: 
\"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.295885 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-run-httpd\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.301170 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.301256 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.308232 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-scripts\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.308987 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-config-data\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.323341 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnqbz\" (UniqueName: 
\"kubernetes.io/projected/4d266d08-5094-4cfe-8adb-720a7dafcfdd-kube-api-access-vnqbz\") pod \"ceilometer-0\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") " pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.336236 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.717006 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ffc8e4-468d-4e80-9ab0-581842caf6b2" path="/var/lib/kubelet/pods/b5ffc8e4-468d-4e80-9ab0-581842caf6b2/volumes" Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.826077 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.939956 4955 generic.go:334] "Generic (PLEG): container finished" podID="92884a73-5a0a-4a22-9919-03c0c4c6829d" containerID="7ac7e9d6ded59c025760d3b72852c742c1ae6dc0820eb1b020a6ac5545342105" exitCode=0 Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.940085 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" event={"ID":"92884a73-5a0a-4a22-9919-03c0c4c6829d","Type":"ContainerDied","Data":"7ac7e9d6ded59c025760d3b72852c742c1ae6dc0820eb1b020a6ac5545342105"} Nov 28 06:40:53 crc kubenswrapper[4955]: I1128 06:40:53.948063 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerStarted","Data":"b81e2e7359bc9113fe04f1f64f418a6710cc6c7baf15fceddcced95795e91ae6"} Nov 28 06:40:54 crc kubenswrapper[4955]: I1128 06:40:54.300306 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="a4e26621-fb49-4397-80c0-e4be8cbc7c41" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 
06:40:54 crc kubenswrapper[4955]: I1128 06:40:54.972278 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerStarted","Data":"19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b"} Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.266142 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.338650 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-config-data\") pod \"92884a73-5a0a-4a22-9919-03c0c4c6829d\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.338845 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-combined-ca-bundle\") pod \"92884a73-5a0a-4a22-9919-03c0c4c6829d\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.339112 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-scripts\") pod \"92884a73-5a0a-4a22-9919-03c0c4c6829d\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.340306 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl9jq\" (UniqueName: \"kubernetes.io/projected/92884a73-5a0a-4a22-9919-03c0c4c6829d-kube-api-access-sl9jq\") pod \"92884a73-5a0a-4a22-9919-03c0c4c6829d\" (UID: \"92884a73-5a0a-4a22-9919-03c0c4c6829d\") " Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.343118 4955 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-scripts" (OuterVolumeSpecName: "scripts") pod "92884a73-5a0a-4a22-9919-03c0c4c6829d" (UID: "92884a73-5a0a-4a22-9919-03c0c4c6829d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.344223 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92884a73-5a0a-4a22-9919-03c0c4c6829d-kube-api-access-sl9jq" (OuterVolumeSpecName: "kube-api-access-sl9jq") pod "92884a73-5a0a-4a22-9919-03c0c4c6829d" (UID: "92884a73-5a0a-4a22-9919-03c0c4c6829d"). InnerVolumeSpecName "kube-api-access-sl9jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.373713 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92884a73-5a0a-4a22-9919-03c0c4c6829d" (UID: "92884a73-5a0a-4a22-9919-03c0c4c6829d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.374587 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-config-data" (OuterVolumeSpecName: "config-data") pod "92884a73-5a0a-4a22-9919-03c0c4c6829d" (UID: "92884a73-5a0a-4a22-9919-03c0c4c6829d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.442622 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl9jq\" (UniqueName: \"kubernetes.io/projected/92884a73-5a0a-4a22-9919-03c0c4c6829d-kube-api-access-sl9jq\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.442663 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.442675 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.442685 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92884a73-5a0a-4a22-9919-03c0c4c6829d-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.988305 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" event={"ID":"92884a73-5a0a-4a22-9919-03c0c4c6829d","Type":"ContainerDied","Data":"ba31a3149a28875eeb5a2f91d1ff8a854f882e7a5ca900634224d83040ec18c2"} Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.989718 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba31a3149a28875eeb5a2f91d1ff8a854f882e7a5ca900634224d83040ec18c2" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.990694 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-t2nkh" Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.993444 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerStarted","Data":"d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2"} Nov 28 06:40:55 crc kubenswrapper[4955]: I1128 06:40:55.993493 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerStarted","Data":"73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9"} Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.074683 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 06:40:56 crc kubenswrapper[4955]: E1128 06:40:56.075033 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92884a73-5a0a-4a22-9919-03c0c4c6829d" containerName="nova-cell0-conductor-db-sync" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.075052 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="92884a73-5a0a-4a22-9919-03c0c4c6829d" containerName="nova-cell0-conductor-db-sync" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.075254 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="92884a73-5a0a-4a22-9919-03c0c4c6829d" containerName="nova-cell0-conductor-db-sync" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.075840 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.077819 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4qf5f" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.078297 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.086047 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.265022 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1effeb5c-c81a-43ff-8624-9c077f2484a3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.265338 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8b9f\" (UniqueName: \"kubernetes.io/projected/1effeb5c-c81a-43ff-8624-9c077f2484a3-kube-api-access-p8b9f\") pod \"nova-cell0-conductor-0\" (UID: \"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.265531 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1effeb5c-c81a-43ff-8624-9c077f2484a3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.367297 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1effeb5c-c81a-43ff-8624-9c077f2484a3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.367944 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1effeb5c-c81a-43ff-8624-9c077f2484a3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.368144 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8b9f\" (UniqueName: \"kubernetes.io/projected/1effeb5c-c81a-43ff-8624-9c077f2484a3-kube-api-access-p8b9f\") pod \"nova-cell0-conductor-0\" (UID: \"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.381564 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1effeb5c-c81a-43ff-8624-9c077f2484a3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.387261 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1effeb5c-c81a-43ff-8624-9c077f2484a3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.388992 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8b9f\" (UniqueName: \"kubernetes.io/projected/1effeb5c-c81a-43ff-8624-9c077f2484a3-kube-api-access-p8b9f\") pod \"nova-cell0-conductor-0\" (UID: 
\"1effeb5c-c81a-43ff-8624-9c077f2484a3\") " pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.393078 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:56 crc kubenswrapper[4955]: I1128 06:40:56.887372 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 06:40:57 crc kubenswrapper[4955]: I1128 06:40:57.006534 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1effeb5c-c81a-43ff-8624-9c077f2484a3","Type":"ContainerStarted","Data":"e5d328894db1d914309ec8965483b5eddebe9d232b8b417589b03dec2a5b9b27"} Nov 28 06:40:58 crc kubenswrapper[4955]: I1128 06:40:58.024950 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1effeb5c-c81a-43ff-8624-9c077f2484a3","Type":"ContainerStarted","Data":"e042ef0e611c364e289bac6cbef02223582757cc5f0946ed78e1c07032ff0e05"} Nov 28 06:40:58 crc kubenswrapper[4955]: I1128 06:40:58.027942 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 06:40:58 crc kubenswrapper[4955]: I1128 06:40:58.034729 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerStarted","Data":"db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5"} Nov 28 06:40:58 crc kubenswrapper[4955]: I1128 06:40:58.035157 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 06:40:58 crc kubenswrapper[4955]: I1128 06:40:58.071323 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.071295929 podStartE2EDuration="2.071295929s" podCreationTimestamp="2025-11-28 06:40:56 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:40:58.056157397 +0000 UTC m=+1180.645413027" watchObservedRunningTime="2025-11-28 06:40:58.071295929 +0000 UTC m=+1180.660551539" Nov 28 06:40:58 crc kubenswrapper[4955]: I1128 06:40:58.095225 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.622393592 podStartE2EDuration="6.095196689s" podCreationTimestamp="2025-11-28 06:40:52 +0000 UTC" firstStartedPulling="2025-11-28 06:40:53.836612634 +0000 UTC m=+1176.425868204" lastFinishedPulling="2025-11-28 06:40:57.309415731 +0000 UTC m=+1179.898671301" observedRunningTime="2025-11-28 06:40:58.093320658 +0000 UTC m=+1180.682576238" watchObservedRunningTime="2025-11-28 06:40:58.095196689 +0000 UTC m=+1180.684452309" Nov 28 06:41:06 crc kubenswrapper[4955]: I1128 06:41:06.433245 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 06:41:06 crc kubenswrapper[4955]: I1128 06:41:06.936212 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qsxnq"] Nov 28 06:41:06 crc kubenswrapper[4955]: I1128 06:41:06.937551 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:06 crc kubenswrapper[4955]: I1128 06:41:06.939060 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 06:41:06 crc kubenswrapper[4955]: I1128 06:41:06.939279 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 06:41:06 crc kubenswrapper[4955]: I1128 06:41:06.953423 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qsxnq"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.002036 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-scripts\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.002105 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.002140 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s4fz\" (UniqueName: \"kubernetes.io/projected/59b859bc-9a75-493a-8a9e-7712775f51c9-kube-api-access-9s4fz\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.002194 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-config-data\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.076423 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.079805 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.082899 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.092778 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.103544 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.103591 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s4fz\" (UniqueName: \"kubernetes.io/projected/59b859bc-9a75-493a-8a9e-7712775f51c9-kube-api-access-9s4fz\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.103642 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-config-data\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: 
\"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.103693 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.103762 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc2s7\" (UniqueName: \"kubernetes.io/projected/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-kube-api-access-hc2s7\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.103797 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.103817 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-scripts\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.111732 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: 
\"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.112475 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-scripts\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.121633 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-config-data\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.131450 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s4fz\" (UniqueName: \"kubernetes.io/projected/59b859bc-9a75-493a-8a9e-7712775f51c9-kube-api-access-9s4fz\") pod \"nova-cell0-cell-mapping-qsxnq\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.196008 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.197556 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.200773 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.205448 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.205556 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc2s7\" (UniqueName: \"kubernetes.io/projected/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-kube-api-access-hc2s7\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.205598 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.215412 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.222349 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.222866 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.257299 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc2s7\" (UniqueName: \"kubernetes.io/projected/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-kube-api-access-hc2s7\") pod \"nova-cell1-novncproxy-0\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.273886 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.289249 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.290453 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.295318 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.312159 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.330052 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm47l\" (UniqueName: \"kubernetes.io/projected/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-kube-api-access-pm47l\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.330099 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86fc4e9b-6818-4655-a10f-f8563fd9cd24-logs\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.330129 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.330158 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-config-data\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.330207 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-config-data\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.330235 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.330287 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj5kd\" (UniqueName: \"kubernetes.io/projected/86fc4e9b-6818-4655-a10f-f8563fd9cd24-kube-api-access-tj5kd\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.354586 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.365303 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4lg9n"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.366914 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.366942 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.371971 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.399109 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.423005 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4lg9n"] Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431350 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj5kd\" (UniqueName: \"kubernetes.io/projected/86fc4e9b-6818-4655-a10f-f8563fd9cd24-kube-api-access-tj5kd\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431389 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431433 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm47l\" (UniqueName: \"kubernetes.io/projected/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-kube-api-access-pm47l\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431455 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86fc4e9b-6818-4655-a10f-f8563fd9cd24-logs\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431499 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431530 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431560 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-config-data\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431578 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-config-data\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431592 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.431612 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlddp\" (UniqueName: \"kubernetes.io/projected/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-kube-api-access-tlddp\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" 
(UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.437009 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-config\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.437081 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd74e-5e89-4167-917b-f66827f7d0de-logs\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.437124 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2ft9\" (UniqueName: \"kubernetes.io/projected/2fbfd74e-5e89-4167-917b-f66827f7d0de-kube-api-access-q2ft9\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.437154 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-config-data\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.437171 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" 
Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.437199 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.437240 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.436938 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86fc4e9b-6818-4655-a10f-f8563fd9cd24-logs\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.440771 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.441804 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.443295 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:07 
crc kubenswrapper[4955]: I1128 06:41:07.457623 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-config-data\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.462268 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj5kd\" (UniqueName: \"kubernetes.io/projected/86fc4e9b-6818-4655-a10f-f8563fd9cd24-kube-api-access-tj5kd\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.468848 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-config-data\") pod \"nova-metadata-0\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.469739 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm47l\" (UniqueName: \"kubernetes.io/projected/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-kube-api-access-pm47l\") pod \"nova-scheduler-0\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.490214 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538622 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538706 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-config-data\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538739 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538780 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlddp\" (UniqueName: \"kubernetes.io/projected/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-kube-api-access-tlddp\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538811 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-config\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538852 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd74e-5e89-4167-917b-f66827f7d0de-logs\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538885 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2ft9\" (UniqueName: \"kubernetes.io/projected/2fbfd74e-5e89-4167-917b-f66827f7d0de-kube-api-access-q2ft9\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538919 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.538944 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.539039 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.540566 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.541452 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-config\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.541590 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.542074 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.545231 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd74e-5e89-4167-917b-f66827f7d0de-logs\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.545293 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-config-data\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " 
pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.545426 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.548547 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.563599 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlddp\" (UniqueName: \"kubernetes.io/projected/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-kube-api-access-tlddp\") pod \"dnsmasq-dns-845d6d6f59-4lg9n\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.568050 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2ft9\" (UniqueName: \"kubernetes.io/projected/2fbfd74e-5e89-4167-917b-f66827f7d0de-kube-api-access-q2ft9\") pod \"nova-api-0\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.751085 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.798723 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.826629 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:07 crc kubenswrapper[4955]: I1128 06:41:07.957364 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qsxnq"] Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.056257 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:41:08 crc kubenswrapper[4955]: W1128 06:41:08.057035 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeeac2ca0_d495_4644_a7d9_8a57dfd01cb8.slice/crio-4c8826973117edfd42f3f5b0135d7aed9bad01ff402fe9284977000a72449e29 WatchSource:0}: Error finding container 4c8826973117edfd42f3f5b0135d7aed9bad01ff402fe9284977000a72449e29: Status 404 returned error can't find the container with id 4c8826973117edfd42f3f5b0135d7aed9bad01ff402fe9284977000a72449e29 Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.069880 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xkk9"] Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.072079 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.076841 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.077052 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.092483 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xkk9"] Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.124623 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.150884 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"90978c6f-38fd-4b9e-83e9-18a3082fe2fa","Type":"ContainerStarted","Data":"396d4410040ac5045c38732fb346427852b9cb3e11bc7f2444eb5b580102785c"} Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.153660 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-scripts\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.153747 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.153785 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-config-data\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.153817 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dktht\" (UniqueName: \"kubernetes.io/projected/7aa3a596-0803-435d-8362-7b80edd615cd-kube-api-access-dktht\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.154375 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8","Type":"ContainerStarted","Data":"4c8826973117edfd42f3f5b0135d7aed9bad01ff402fe9284977000a72449e29"} Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.158180 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qsxnq" event={"ID":"59b859bc-9a75-493a-8a9e-7712775f51c9","Type":"ContainerStarted","Data":"668c1da1914e10177740c7020c73683b9df86118008d26ef635ae38a1a51ef7b"} Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.251909 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:08 crc kubenswrapper[4955]: W1128 06:41:08.254905 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86fc4e9b_6818_4655_a10f_f8563fd9cd24.slice/crio-89dfe6f3b121cb5de97ee3f7bd4b47ba2f79342e30bc59d5f020423cf67441e0 WatchSource:0}: Error finding container 89dfe6f3b121cb5de97ee3f7bd4b47ba2f79342e30bc59d5f020423cf67441e0: Status 404 returned error can't find the container with id 
89dfe6f3b121cb5de97ee3f7bd4b47ba2f79342e30bc59d5f020423cf67441e0 Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.255107 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dktht\" (UniqueName: \"kubernetes.io/projected/7aa3a596-0803-435d-8362-7b80edd615cd-kube-api-access-dktht\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.255230 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-scripts\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.255338 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.255384 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-config-data\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.262449 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-config-data\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " 
pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.262923 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-scripts\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.269078 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.275464 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dktht\" (UniqueName: \"kubernetes.io/projected/7aa3a596-0803-435d-8362-7b80edd615cd-kube-api-access-dktht\") pod \"nova-cell1-conductor-db-sync-2xkk9\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.368348 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:08 crc kubenswrapper[4955]: W1128 06:41:08.387077 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fbfd74e_5e89_4167_917b_f66827f7d0de.slice/crio-d0e404944497bcc86f302962b726b44ec8ed82ca76a9abdcde73c16e30c67803 WatchSource:0}: Error finding container d0e404944497bcc86f302962b726b44ec8ed82ca76a9abdcde73c16e30c67803: Status 404 returned error can't find the container with id d0e404944497bcc86f302962b726b44ec8ed82ca76a9abdcde73c16e30c67803 Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.413079 4955 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.454524 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4lg9n"] Nov 28 06:41:08 crc kubenswrapper[4955]: W1128 06:41:08.466005 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c0fc60a_bc19_418c_8a9c_8cf6aa10afea.slice/crio-f9f0bb472bdcef0f21042db89bf8b60894eb625fb6ca3bc3ffe21cdff89c6e1f WatchSource:0}: Error finding container f9f0bb472bdcef0f21042db89bf8b60894eb625fb6ca3bc3ffe21cdff89c6e1f: Status 404 returned error can't find the container with id f9f0bb472bdcef0f21042db89bf8b60894eb625fb6ca3bc3ffe21cdff89c6e1f Nov 28 06:41:08 crc kubenswrapper[4955]: I1128 06:41:08.686140 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xkk9"] Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.170223 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fbfd74e-5e89-4167-917b-f66827f7d0de","Type":"ContainerStarted","Data":"d0e404944497bcc86f302962b726b44ec8ed82ca76a9abdcde73c16e30c67803"} Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.172490 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86fc4e9b-6818-4655-a10f-f8563fd9cd24","Type":"ContainerStarted","Data":"89dfe6f3b121cb5de97ee3f7bd4b47ba2f79342e30bc59d5f020423cf67441e0"} Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.175053 4955 generic.go:334] "Generic (PLEG): container finished" podID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" containerID="a5b9cbd98792fd41eaa48ca2272d1f94b24c00ab056c48e97309f0014e39e28e" exitCode=0 Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.175104 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" event={"ID":"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea","Type":"ContainerDied","Data":"a5b9cbd98792fd41eaa48ca2272d1f94b24c00ab056c48e97309f0014e39e28e"} Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.175124 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" event={"ID":"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea","Type":"ContainerStarted","Data":"f9f0bb472bdcef0f21042db89bf8b60894eb625fb6ca3bc3ffe21cdff89c6e1f"} Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.177860 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" event={"ID":"7aa3a596-0803-435d-8362-7b80edd615cd","Type":"ContainerStarted","Data":"68d4d5223f6eeaf7850d9c12c4e08f234f20850eff14c4503a4d6dd45471e815"} Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.177886 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" event={"ID":"7aa3a596-0803-435d-8362-7b80edd615cd","Type":"ContainerStarted","Data":"5c18631ff4d592417dc8d6401017a2ef3b838e2a9178c7661879385e4cbb8761"} Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.182049 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qsxnq" event={"ID":"59b859bc-9a75-493a-8a9e-7712775f51c9","Type":"ContainerStarted","Data":"275494a6e9fea0ad62ca766e238b226733177049809e65a1415c691888b09c19"} Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.229606 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qsxnq" podStartSLOduration=3.229589117 podStartE2EDuration="3.229589117s" podCreationTimestamp="2025-11-28 06:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:09.219984576 +0000 UTC m=+1191.809240146" watchObservedRunningTime="2025-11-28 
06:41:09.229589117 +0000 UTC m=+1191.818844677" Nov 28 06:41:09 crc kubenswrapper[4955]: I1128 06:41:09.248623 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" podStartSLOduration=1.248607755 podStartE2EDuration="1.248607755s" podCreationTimestamp="2025-11-28 06:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:09.233534935 +0000 UTC m=+1191.822790505" watchObservedRunningTime="2025-11-28 06:41:09.248607755 +0000 UTC m=+1191.837863315" Nov 28 06:41:10 crc kubenswrapper[4955]: I1128 06:41:10.202789 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" event={"ID":"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea","Type":"ContainerStarted","Data":"01b04c868eacb805b05d8fd55275cd2af4e8e5e60540db128aff93bac204f619"} Nov 28 06:41:10 crc kubenswrapper[4955]: I1128 06:41:10.235185 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" podStartSLOduration=3.235159478 podStartE2EDuration="3.235159478s" podCreationTimestamp="2025-11-28 06:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:10.226877342 +0000 UTC m=+1192.816132912" watchObservedRunningTime="2025-11-28 06:41:10.235159478 +0000 UTC m=+1192.824415038" Nov 28 06:41:10 crc kubenswrapper[4955]: I1128 06:41:10.961235 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:10 crc kubenswrapper[4955]: I1128 06:41:10.971412 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:11 crc kubenswrapper[4955]: I1128 06:41:11.210265 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 
28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.220300 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fbfd74e-5e89-4167-917b-f66827f7d0de","Type":"ContainerStarted","Data":"0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44"} Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.220746 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fbfd74e-5e89-4167-917b-f66827f7d0de","Type":"ContainerStarted","Data":"9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e"} Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.222334 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"90978c6f-38fd-4b9e-83e9-18a3082fe2fa","Type":"ContainerStarted","Data":"c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1"} Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.222465 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="90978c6f-38fd-4b9e-83e9-18a3082fe2fa" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1" gracePeriod=30 Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.230489 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86fc4e9b-6818-4655-a10f-f8563fd9cd24","Type":"ContainerStarted","Data":"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360"} Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.230551 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86fc4e9b-6818-4655-a10f-f8563fd9cd24","Type":"ContainerStarted","Data":"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea"} Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.230687 4955 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/nova-metadata-0" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerName="nova-metadata-log" containerID="cri-o://35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea" gracePeriod=30 Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.231002 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerName="nova-metadata-metadata" containerID="cri-o://1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360" gracePeriod=30 Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.238860 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8","Type":"ContainerStarted","Data":"4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310"} Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.249966 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.353085959 podStartE2EDuration="5.249950128s" podCreationTimestamp="2025-11-28 06:41:07 +0000 UTC" firstStartedPulling="2025-11-28 06:41:08.391028002 +0000 UTC m=+1190.980283572" lastFinishedPulling="2025-11-28 06:41:11.287892171 +0000 UTC m=+1193.877147741" observedRunningTime="2025-11-28 06:41:12.246254787 +0000 UTC m=+1194.835510387" watchObservedRunningTime="2025-11-28 06:41:12.249950128 +0000 UTC m=+1194.839205708" Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.276042 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.054706767 podStartE2EDuration="5.276018947s" podCreationTimestamp="2025-11-28 06:41:07 +0000 UTC" firstStartedPulling="2025-11-28 06:41:08.059969041 +0000 UTC m=+1190.649224611" lastFinishedPulling="2025-11-28 06:41:11.281281221 +0000 UTC m=+1193.870536791" observedRunningTime="2025-11-28 
06:41:12.273778696 +0000 UTC m=+1194.863034286" watchObservedRunningTime="2025-11-28 06:41:12.276018947 +0000 UTC m=+1194.865274527" Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.378954 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.346990713 podStartE2EDuration="5.378936219s" podCreationTimestamp="2025-11-28 06:41:07 +0000 UTC" firstStartedPulling="2025-11-28 06:41:08.25684047 +0000 UTC m=+1190.846096040" lastFinishedPulling="2025-11-28 06:41:11.288785976 +0000 UTC m=+1193.878041546" observedRunningTime="2025-11-28 06:41:12.346163107 +0000 UTC m=+1194.935418687" watchObservedRunningTime="2025-11-28 06:41:12.378936219 +0000 UTC m=+1194.968191789" Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.379227 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.189791595 podStartE2EDuration="5.379222477s" podCreationTimestamp="2025-11-28 06:41:07 +0000 UTC" firstStartedPulling="2025-11-28 06:41:08.091331865 +0000 UTC m=+1190.680587425" lastFinishedPulling="2025-11-28 06:41:11.280762737 +0000 UTC m=+1193.870018307" observedRunningTime="2025-11-28 06:41:12.312772688 +0000 UTC m=+1194.902028258" watchObservedRunningTime="2025-11-28 06:41:12.379222477 +0000 UTC m=+1194.968478047" Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.402678 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.491117 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.751140 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 06:41:12 crc kubenswrapper[4955]: I1128 06:41:12.751189 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.245042 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.251540 4955 generic.go:334] "Generic (PLEG): container finished" podID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerID="1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360" exitCode=0 Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.251574 4955 generic.go:334] "Generic (PLEG): container finished" podID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerID="35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea" exitCode=143 Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.251877 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86fc4e9b-6818-4655-a10f-f8563fd9cd24","Type":"ContainerDied","Data":"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360"} Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.251915 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86fc4e9b-6818-4655-a10f-f8563fd9cd24","Type":"ContainerDied","Data":"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea"} Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.251925 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"86fc4e9b-6818-4655-a10f-f8563fd9cd24","Type":"ContainerDied","Data":"89dfe6f3b121cb5de97ee3f7bd4b47ba2f79342e30bc59d5f020423cf67441e0"} Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.251941 4955 scope.go:117] "RemoveContainer" containerID="1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.252117 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.280964 4955 scope.go:117] "RemoveContainer" containerID="35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.306770 4955 scope.go:117] "RemoveContainer" containerID="1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360" Nov 28 06:41:13 crc kubenswrapper[4955]: E1128 06:41:13.307621 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360\": container with ID starting with 1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360 not found: ID does not exist" containerID="1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.307667 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360"} err="failed to get container status \"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360\": rpc error: code = NotFound desc = could not find container \"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360\": container with ID starting with 1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360 not found: ID does not exist" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.307692 4955 scope.go:117] "RemoveContainer" containerID="35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea" Nov 28 06:41:13 crc kubenswrapper[4955]: E1128 06:41:13.309940 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea\": container with ID starting with 
35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea not found: ID does not exist" containerID="35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.309980 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea"} err="failed to get container status \"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea\": rpc error: code = NotFound desc = could not find container \"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea\": container with ID starting with 35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea not found: ID does not exist" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.310007 4955 scope.go:117] "RemoveContainer" containerID="1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.310391 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360"} err="failed to get container status \"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360\": rpc error: code = NotFound desc = could not find container \"1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360\": container with ID starting with 1ce00501a9419eca164f8498c4adabd3cfefa4f47573e55958df46fd1c905360 not found: ID does not exist" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.310411 4955 scope.go:117] "RemoveContainer" containerID="35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.310668 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea"} err="failed to get container status 
\"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea\": rpc error: code = NotFound desc = could not find container \"35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea\": container with ID starting with 35eca44b858eee75ede63745b400eecc87bfe30593350fc47d8c822f65def7ea not found: ID does not exist" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.377344 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86fc4e9b-6818-4655-a10f-f8563fd9cd24-logs\") pod \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.377424 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-config-data\") pod \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.377589 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-combined-ca-bundle\") pod \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.377652 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj5kd\" (UniqueName: \"kubernetes.io/projected/86fc4e9b-6818-4655-a10f-f8563fd9cd24-kube-api-access-tj5kd\") pod \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\" (UID: \"86fc4e9b-6818-4655-a10f-f8563fd9cd24\") " Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.379049 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86fc4e9b-6818-4655-a10f-f8563fd9cd24-logs" (OuterVolumeSpecName: "logs") pod 
"86fc4e9b-6818-4655-a10f-f8563fd9cd24" (UID: "86fc4e9b-6818-4655-a10f-f8563fd9cd24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.385379 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fc4e9b-6818-4655-a10f-f8563fd9cd24-kube-api-access-tj5kd" (OuterVolumeSpecName: "kube-api-access-tj5kd") pod "86fc4e9b-6818-4655-a10f-f8563fd9cd24" (UID: "86fc4e9b-6818-4655-a10f-f8563fd9cd24"). InnerVolumeSpecName "kube-api-access-tj5kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.407043 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-config-data" (OuterVolumeSpecName: "config-data") pod "86fc4e9b-6818-4655-a10f-f8563fd9cd24" (UID: "86fc4e9b-6818-4655-a10f-f8563fd9cd24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.439289 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86fc4e9b-6818-4655-a10f-f8563fd9cd24" (UID: "86fc4e9b-6818-4655-a10f-f8563fd9cd24"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.480255 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.480286 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86fc4e9b-6818-4655-a10f-f8563fd9cd24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.480299 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj5kd\" (UniqueName: \"kubernetes.io/projected/86fc4e9b-6818-4655-a10f-f8563fd9cd24-kube-api-access-tj5kd\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.480311 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86fc4e9b-6818-4655-a10f-f8563fd9cd24-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.604781 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.623185 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.638696 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:13 crc kubenswrapper[4955]: E1128 06:41:13.639145 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerName="nova-metadata-metadata" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.639161 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerName="nova-metadata-metadata" Nov 28 06:41:13 crc 
kubenswrapper[4955]: E1128 06:41:13.639182 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerName="nova-metadata-log" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.639189 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerName="nova-metadata-log" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.639426 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerName="nova-metadata-log" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.639452 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" containerName="nova-metadata-metadata" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.640705 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.646659 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.650859 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.651606 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.683367 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.683449 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.683585 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-config-data\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.683626 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b509ad-495f-4145-b5fe-dc0ac7cd699b-logs\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.683717 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lzmw\" (UniqueName: \"kubernetes.io/projected/74b509ad-495f-4145-b5fe-dc0ac7cd699b-kube-api-access-9lzmw\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.718520 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fc4e9b-6818-4655-a10f-f8563fd9cd24" path="/var/lib/kubelet/pods/86fc4e9b-6818-4655-a10f-f8563fd9cd24/volumes" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.785309 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-config-data\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 
06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.785388 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b509ad-495f-4145-b5fe-dc0ac7cd699b-logs\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.785460 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lzmw\" (UniqueName: \"kubernetes.io/projected/74b509ad-495f-4145-b5fe-dc0ac7cd699b-kube-api-access-9lzmw\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.785488 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.785539 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.785860 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b509ad-495f-4145-b5fe-dc0ac7cd699b-logs\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.789440 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-config-data\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.789545 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.791492 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.810243 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lzmw\" (UniqueName: \"kubernetes.io/projected/74b509ad-495f-4145-b5fe-dc0ac7cd699b-kube-api-access-9lzmw\") pod \"nova-metadata-0\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") " pod="openstack/nova-metadata-0" Nov 28 06:41:13 crc kubenswrapper[4955]: I1128 06:41:13.967129 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:41:14 crc kubenswrapper[4955]: W1128 06:41:14.458640 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74b509ad_495f_4145_b5fe_dc0ac7cd699b.slice/crio-6caf685d876afde483a5988ca2448b0e9fbfc0b3bc51a2ebd15fcb90252e9bdf WatchSource:0}: Error finding container 6caf685d876afde483a5988ca2448b0e9fbfc0b3bc51a2ebd15fcb90252e9bdf: Status 404 returned error can't find the container with id 6caf685d876afde483a5988ca2448b0e9fbfc0b3bc51a2ebd15fcb90252e9bdf Nov 28 06:41:14 crc kubenswrapper[4955]: I1128 06:41:14.459948 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:41:15 crc kubenswrapper[4955]: I1128 06:41:15.276318 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74b509ad-495f-4145-b5fe-dc0ac7cd699b","Type":"ContainerStarted","Data":"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505"} Nov 28 06:41:15 crc kubenswrapper[4955]: I1128 06:41:15.276659 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74b509ad-495f-4145-b5fe-dc0ac7cd699b","Type":"ContainerStarted","Data":"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e"} Nov 28 06:41:15 crc kubenswrapper[4955]: I1128 06:41:15.276673 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74b509ad-495f-4145-b5fe-dc0ac7cd699b","Type":"ContainerStarted","Data":"6caf685d876afde483a5988ca2448b0e9fbfc0b3bc51a2ebd15fcb90252e9bdf"} Nov 28 06:41:15 crc kubenswrapper[4955]: I1128 06:41:15.293563 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.293540411 podStartE2EDuration="2.293540411s" podCreationTimestamp="2025-11-28 06:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:15.293261973 +0000 UTC m=+1197.882517563" watchObservedRunningTime="2025-11-28 06:41:15.293540411 +0000 UTC m=+1197.882795971" Nov 28 06:41:16 crc kubenswrapper[4955]: I1128 06:41:16.292830 4955 generic.go:334] "Generic (PLEG): container finished" podID="59b859bc-9a75-493a-8a9e-7712775f51c9" containerID="275494a6e9fea0ad62ca766e238b226733177049809e65a1415c691888b09c19" exitCode=0 Nov 28 06:41:16 crc kubenswrapper[4955]: I1128 06:41:16.292927 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qsxnq" event={"ID":"59b859bc-9a75-493a-8a9e-7712775f51c9","Type":"ContainerDied","Data":"275494a6e9fea0ad62ca766e238b226733177049809e65a1415c691888b09c19"} Nov 28 06:41:16 crc kubenswrapper[4955]: I1128 06:41:16.296895 4955 generic.go:334] "Generic (PLEG): container finished" podID="7aa3a596-0803-435d-8362-7b80edd615cd" containerID="68d4d5223f6eeaf7850d9c12c4e08f234f20850eff14c4503a4d6dd45471e815" exitCode=0 Nov 28 06:41:16 crc kubenswrapper[4955]: I1128 06:41:16.297037 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" event={"ID":"7aa3a596-0803-435d-8362-7b80edd615cd","Type":"ContainerDied","Data":"68d4d5223f6eeaf7850d9c12c4e08f234f20850eff14c4503a4d6dd45471e815"} Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.491163 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.527769 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.765406 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.770893 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.799497 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.799979 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.829806 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.872170 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-combined-ca-bundle\") pod \"59b859bc-9a75-493a-8a9e-7712775f51c9\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.872236 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-combined-ca-bundle\") pod \"7aa3a596-0803-435d-8362-7b80edd615cd\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.872343 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s4fz\" (UniqueName: \"kubernetes.io/projected/59b859bc-9a75-493a-8a9e-7712775f51c9-kube-api-access-9s4fz\") pod \"59b859bc-9a75-493a-8a9e-7712775f51c9\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.872381 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-scripts\") pod \"59b859bc-9a75-493a-8a9e-7712775f51c9\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.872476 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dktht\" (UniqueName: \"kubernetes.io/projected/7aa3a596-0803-435d-8362-7b80edd615cd-kube-api-access-dktht\") pod \"7aa3a596-0803-435d-8362-7b80edd615cd\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.872523 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-config-data\") pod \"59b859bc-9a75-493a-8a9e-7712775f51c9\" (UID: \"59b859bc-9a75-493a-8a9e-7712775f51c9\") " Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.872574 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-scripts\") pod \"7aa3a596-0803-435d-8362-7b80edd615cd\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.872650 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-config-data\") pod \"7aa3a596-0803-435d-8362-7b80edd615cd\" (UID: \"7aa3a596-0803-435d-8362-7b80edd615cd\") " Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.881031 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-scripts" (OuterVolumeSpecName: "scripts") pod "7aa3a596-0803-435d-8362-7b80edd615cd" (UID: "7aa3a596-0803-435d-8362-7b80edd615cd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.881106 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa3a596-0803-435d-8362-7b80edd615cd-kube-api-access-dktht" (OuterVolumeSpecName: "kube-api-access-dktht") pod "7aa3a596-0803-435d-8362-7b80edd615cd" (UID: "7aa3a596-0803-435d-8362-7b80edd615cd"). InnerVolumeSpecName "kube-api-access-dktht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.886369 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-scripts" (OuterVolumeSpecName: "scripts") pod "59b859bc-9a75-493a-8a9e-7712775f51c9" (UID: "59b859bc-9a75-493a-8a9e-7712775f51c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.891159 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b859bc-9a75-493a-8a9e-7712775f51c9-kube-api-access-9s4fz" (OuterVolumeSpecName: "kube-api-access-9s4fz") pod "59b859bc-9a75-493a-8a9e-7712775f51c9" (UID: "59b859bc-9a75-493a-8a9e-7712775f51c9"). InnerVolumeSpecName "kube-api-access-9s4fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.915412 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-config-data" (OuterVolumeSpecName: "config-data") pod "59b859bc-9a75-493a-8a9e-7712775f51c9" (UID: "59b859bc-9a75-493a-8a9e-7712775f51c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.927976 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-2cdsc"] Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.928194 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" podUID="7c66787d-973a-4cfa-8cde-75249495ec65" containerName="dnsmasq-dns" containerID="cri-o://a96e1209f09fb52b067748a941531ba0c6040e7291e9dfb20a78424b9a9fc13c" gracePeriod=10 Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.930311 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aa3a596-0803-435d-8362-7b80edd615cd" (UID: "7aa3a596-0803-435d-8362-7b80edd615cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.933389 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-config-data" (OuterVolumeSpecName: "config-data") pod "7aa3a596-0803-435d-8362-7b80edd615cd" (UID: "7aa3a596-0803-435d-8362-7b80edd615cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.962485 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59b859bc-9a75-493a-8a9e-7712775f51c9" (UID: "59b859bc-9a75-493a-8a9e-7712775f51c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.974328 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s4fz\" (UniqueName: \"kubernetes.io/projected/59b859bc-9a75-493a-8a9e-7712775f51c9-kube-api-access-9s4fz\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.974360 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.974369 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dktht\" (UniqueName: \"kubernetes.io/projected/7aa3a596-0803-435d-8362-7b80edd615cd-kube-api-access-dktht\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.974377 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.974386 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.974396 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.974404 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59b859bc-9a75-493a-8a9e-7712775f51c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:17 crc kubenswrapper[4955]: I1128 06:41:17.974413 4955 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aa3a596-0803-435d-8362-7b80edd615cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:18 crc kubenswrapper[4955]: E1128 06:41:18.086220 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c66787d_973a_4cfa_8cde_75249495ec65.slice/crio-a96e1209f09fb52b067748a941531ba0c6040e7291e9dfb20a78424b9a9fc13c.scope\": RecentStats: unable to find data in memory cache]" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.326638 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qsxnq" event={"ID":"59b859bc-9a75-493a-8a9e-7712775f51c9","Type":"ContainerDied","Data":"668c1da1914e10177740c7020c73683b9df86118008d26ef635ae38a1a51ef7b"} Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.326962 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="668c1da1914e10177740c7020c73683b9df86118008d26ef635ae38a1a51ef7b" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.327074 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qsxnq" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.328422 4955 generic.go:334] "Generic (PLEG): container finished" podID="7c66787d-973a-4cfa-8cde-75249495ec65" containerID="a96e1209f09fb52b067748a941531ba0c6040e7291e9dfb20a78424b9a9fc13c" exitCode=0 Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.328460 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" event={"ID":"7c66787d-973a-4cfa-8cde-75249495ec65","Type":"ContainerDied","Data":"a96e1209f09fb52b067748a941531ba0c6040e7291e9dfb20a78424b9a9fc13c"} Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.332387 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.337713 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-2xkk9" event={"ID":"7aa3a596-0803-435d-8362-7b80edd615cd","Type":"ContainerDied","Data":"5c18631ff4d592417dc8d6401017a2ef3b838e2a9178c7661879385e4cbb8761"} Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.337771 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c18631ff4d592417dc8d6401017a2ef3b838e2a9178c7661879385e4cbb8761" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.376224 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.387070 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.443829 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 06:41:18 crc kubenswrapper[4955]: E1128 06:41:18.444430 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c66787d-973a-4cfa-8cde-75249495ec65" containerName="dnsmasq-dns" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.444441 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c66787d-973a-4cfa-8cde-75249495ec65" containerName="dnsmasq-dns" Nov 28 06:41:18 crc kubenswrapper[4955]: E1128 06:41:18.444471 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b859bc-9a75-493a-8a9e-7712775f51c9" containerName="nova-manage" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.444477 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b859bc-9a75-493a-8a9e-7712775f51c9" containerName="nova-manage" Nov 28 06:41:18 crc kubenswrapper[4955]: E1128 06:41:18.444498 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aa3a596-0803-435d-8362-7b80edd615cd" containerName="nova-cell1-conductor-db-sync" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.444519 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aa3a596-0803-435d-8362-7b80edd615cd" containerName="nova-cell1-conductor-db-sync" Nov 28 06:41:18 crc kubenswrapper[4955]: E1128 06:41:18.444540 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c66787d-973a-4cfa-8cde-75249495ec65" containerName="init" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.444546 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c66787d-973a-4cfa-8cde-75249495ec65" containerName="init" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.445379 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b859bc-9a75-493a-8a9e-7712775f51c9" containerName="nova-manage" 
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.445445 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aa3a596-0803-435d-8362-7b80edd615cd" containerName="nova-cell1-conductor-db-sync" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.445472 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c66787d-973a-4cfa-8cde-75249495ec65" containerName="dnsmasq-dns" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.446735 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.458491 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.485782 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.489928 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-swift-storage-0\") pod \"7c66787d-973a-4cfa-8cde-75249495ec65\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.490009 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-nb\") pod \"7c66787d-973a-4cfa-8cde-75249495ec65\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.490106 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-sb\") pod \"7c66787d-973a-4cfa-8cde-75249495ec65\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") " 
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.490155 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-svc\") pod \"7c66787d-973a-4cfa-8cde-75249495ec65\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") "
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.490188 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tj7d\" (UniqueName: \"kubernetes.io/projected/7c66787d-973a-4cfa-8cde-75249495ec65-kube-api-access-5tj7d\") pod \"7c66787d-973a-4cfa-8cde-75249495ec65\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") "
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.490221 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-config\") pod \"7c66787d-973a-4cfa-8cde-75249495ec65\" (UID: \"7c66787d-973a-4cfa-8cde-75249495ec65\") "
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.490699 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdvp2\" (UniqueName: \"kubernetes.io/projected/3d076178-18c0-42af-b40b-3cc8f1cb77cb-kube-api-access-fdvp2\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.490732 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d076178-18c0-42af-b40b-3cc8f1cb77cb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.490877 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d076178-18c0-42af-b40b-3cc8f1cb77cb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.495891 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c66787d-973a-4cfa-8cde-75249495ec65-kube-api-access-5tj7d" (OuterVolumeSpecName: "kube-api-access-5tj7d") pod "7c66787d-973a-4cfa-8cde-75249495ec65" (UID: "7c66787d-973a-4cfa-8cde-75249495ec65"). InnerVolumeSpecName "kube-api-access-5tj7d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.540714 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.550852 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7c66787d-973a-4cfa-8cde-75249495ec65" (UID: "7c66787d-973a-4cfa-8cde-75249495ec65"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.568991 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-config" (OuterVolumeSpecName: "config") pod "7c66787d-973a-4cfa-8cde-75249495ec65" (UID: "7c66787d-973a-4cfa-8cde-75249495ec65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.573036 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.573247 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerName="nova-metadata-log" containerID="cri-o://2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e" gracePeriod=30
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.573641 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerName="nova-metadata-metadata" containerID="cri-o://435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505" gracePeriod=30
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.590001 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c66787d-973a-4cfa-8cde-75249495ec65" (UID: "7c66787d-973a-4cfa-8cde-75249495ec65"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.594120 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdvp2\" (UniqueName: \"kubernetes.io/projected/3d076178-18c0-42af-b40b-3cc8f1cb77cb-kube-api-access-fdvp2\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.594157 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d076178-18c0-42af-b40b-3cc8f1cb77cb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.594251 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d076178-18c0-42af-b40b-3cc8f1cb77cb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.594330 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.594341 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tj7d\" (UniqueName: \"kubernetes.io/projected/7c66787d-973a-4cfa-8cde-75249495ec65-kube-api-access-5tj7d\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.594351 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-config\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.594359 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.600283 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d076178-18c0-42af-b40b-3cc8f1cb77cb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.602783 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d076178-18c0-42af-b40b-3cc8f1cb77cb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.603414 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c66787d-973a-4cfa-8cde-75249495ec65" (UID: "7c66787d-973a-4cfa-8cde-75249495ec65"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.603942 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c66787d-973a-4cfa-8cde-75249495ec65" (UID: "7c66787d-973a-4cfa-8cde-75249495ec65"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.612596 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdvp2\" (UniqueName: \"kubernetes.io/projected/3d076178-18c0-42af-b40b-3cc8f1cb77cb-kube-api-access-fdvp2\") pod \"nova-cell1-conductor-0\" (UID: \"3d076178-18c0-42af-b40b-3cc8f1cb77cb\") " pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.696371 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.696398 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c66787d-973a-4cfa-8cde-75249495ec65-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.791891 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.885706 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.885706 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.941541 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.967372 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 28 06:41:18 crc kubenswrapper[4955]: I1128 06:41:18.967651 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.146691 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.207647 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-config-data\") pod \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") "
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.207721 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lzmw\" (UniqueName: \"kubernetes.io/projected/74b509ad-495f-4145-b5fe-dc0ac7cd699b-kube-api-access-9lzmw\") pod \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") "
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.207831 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-nova-metadata-tls-certs\") pod \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") "
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.207879 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b509ad-495f-4145-b5fe-dc0ac7cd699b-logs\") pod \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") "
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.207925 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-combined-ca-bundle\") pod \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\" (UID: \"74b509ad-495f-4145-b5fe-dc0ac7cd699b\") "
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.210396 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b509ad-495f-4145-b5fe-dc0ac7cd699b-logs" (OuterVolumeSpecName: "logs") pod "74b509ad-495f-4145-b5fe-dc0ac7cd699b" (UID: "74b509ad-495f-4145-b5fe-dc0ac7cd699b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.213029 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b509ad-495f-4145-b5fe-dc0ac7cd699b-kube-api-access-9lzmw" (OuterVolumeSpecName: "kube-api-access-9lzmw") pod "74b509ad-495f-4145-b5fe-dc0ac7cd699b" (UID: "74b509ad-495f-4145-b5fe-dc0ac7cd699b"). InnerVolumeSpecName "kube-api-access-9lzmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.234967 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74b509ad-495f-4145-b5fe-dc0ac7cd699b" (UID: "74b509ad-495f-4145-b5fe-dc0ac7cd699b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.236396 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-config-data" (OuterVolumeSpecName: "config-data") pod "74b509ad-495f-4145-b5fe-dc0ac7cd699b" (UID: "74b509ad-495f-4145-b5fe-dc0ac7cd699b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.262386 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "74b509ad-495f-4145-b5fe-dc0ac7cd699b" (UID: "74b509ad-495f-4145-b5fe-dc0ac7cd699b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.313924 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lzmw\" (UniqueName: \"kubernetes.io/projected/74b509ad-495f-4145-b5fe-dc0ac7cd699b-kube-api-access-9lzmw\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.313960 4955 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.313970 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74b509ad-495f-4145-b5fe-dc0ac7cd699b-logs\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.313982 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.313992 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b509ad-495f-4145-b5fe-dc0ac7cd699b-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.318358 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 28 06:41:19 crc kubenswrapper[4955]: W1128 06:41:19.318465 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d076178_18c0_42af_b40b_3cc8f1cb77cb.slice/crio-5907476c5a642caeb853b5346366cd0ef29f3df5756f865beb49e830d605642f WatchSource:0}: Error finding container 5907476c5a642caeb853b5346366cd0ef29f3df5756f865beb49e830d605642f: Status 404 returned error can't find the container with id 5907476c5a642caeb853b5346366cd0ef29f3df5756f865beb49e830d605642f
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.346975 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3d076178-18c0-42af-b40b-3cc8f1cb77cb","Type":"ContainerStarted","Data":"5907476c5a642caeb853b5346366cd0ef29f3df5756f865beb49e830d605642f"}
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.353620 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" event={"ID":"7c66787d-973a-4cfa-8cde-75249495ec65","Type":"ContainerDied","Data":"49a0d25408f27cd8450cf7c74b8a532bf1f145641db3c2e55bbcf26eda1b5672"}
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.353671 4955 scope.go:117] "RemoveContainer" containerID="a96e1209f09fb52b067748a941531ba0c6040e7291e9dfb20a78424b9a9fc13c"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.353679 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.357446 4955 generic.go:334] "Generic (PLEG): container finished" podID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerID="435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505" exitCode=0
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.357481 4955 generic.go:334] "Generic (PLEG): container finished" podID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerID="2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e" exitCode=143
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.357489 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.357530 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74b509ad-495f-4145-b5fe-dc0ac7cd699b","Type":"ContainerDied","Data":"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505"}
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.357563 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74b509ad-495f-4145-b5fe-dc0ac7cd699b","Type":"ContainerDied","Data":"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e"}
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.357576 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"74b509ad-495f-4145-b5fe-dc0ac7cd699b","Type":"ContainerDied","Data":"6caf685d876afde483a5988ca2448b0e9fbfc0b3bc51a2ebd15fcb90252e9bdf"}
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.358132 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-log" containerID="cri-o://9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e" gracePeriod=30
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.358654 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-api" containerID="cri-o://0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44" gracePeriod=30
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.402621 4955 scope.go:117] "RemoveContainer" containerID="2347d7cd31267602455505c48197a991eb6eb1b5439a112df740581505ba3565"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.405053 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-2cdsc"]
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.419709 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-2cdsc"]
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.435607 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.446037 4955 scope.go:117] "RemoveContainer" containerID="435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.472047 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.488968 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 06:41:19 crc kubenswrapper[4955]: E1128 06:41:19.489520 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerName="nova-metadata-metadata"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.489539 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerName="nova-metadata-metadata"
Nov 28 06:41:19 crc kubenswrapper[4955]: E1128 06:41:19.489556 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerName="nova-metadata-log"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.489563 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerName="nova-metadata-log"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.489806 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerName="nova-metadata-log"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.489839 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" containerName="nova-metadata-metadata"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.490853 4955 scope.go:117] "RemoveContainer" containerID="2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.491043 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.493845 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.493926 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.502882 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.540680 4955 scope.go:117] "RemoveContainer" containerID="435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505"
Nov 28 06:41:19 crc kubenswrapper[4955]: E1128 06:41:19.546618 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505\": container with ID starting with 435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505 not found: ID does not exist" containerID="435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.546665 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505"} err="failed to get container status \"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505\": rpc error: code = NotFound desc = could not find container \"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505\": container with ID starting with 435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505 not found: ID does not exist"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.546698 4955 scope.go:117] "RemoveContainer" containerID="2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e"
Nov 28 06:41:19 crc kubenswrapper[4955]: E1128 06:41:19.549967 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e\": container with ID starting with 2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e not found: ID does not exist" containerID="2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.550026 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e"} err="failed to get container status \"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e\": rpc error: code = NotFound desc = could not find container \"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e\": container with ID starting with 2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e not found: ID does not exist"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.550056 4955 scope.go:117] "RemoveContainer" containerID="435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.552821 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505"} err="failed to get container status \"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505\": rpc error: code = NotFound desc = could not find container \"435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505\": container with ID starting with 435ebe59432b1ee72f718617c56bee7ceb8c40cc789c54a2de089799fda9a505 not found: ID does not exist"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.552850 4955 scope.go:117] "RemoveContainer" containerID="2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.556861 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e"} err="failed to get container status \"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e\": rpc error: code = NotFound desc = could not find container \"2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e\": container with ID starting with 2d22a5ccb0ac745c9df7ea8a4afe824542b41909ccb30d545da624f03c65b08e not found: ID does not exist"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.620827 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsm5\" (UniqueName: \"kubernetes.io/projected/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-kube-api-access-qpsm5\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.620996 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.621059 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-config-data\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.621107 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-logs\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.621208 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.713762 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b509ad-495f-4145-b5fe-dc0ac7cd699b" path="/var/lib/kubelet/pods/74b509ad-495f-4145-b5fe-dc0ac7cd699b/volumes"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.714402 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c66787d-973a-4cfa-8cde-75249495ec65" path="/var/lib/kubelet/pods/7c66787d-973a-4cfa-8cde-75249495ec65/volumes"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.723163 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.723213 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-config-data\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.723239 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-logs\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.723284 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.723335 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsm5\" (UniqueName: \"kubernetes.io/projected/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-kube-api-access-qpsm5\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.723966 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-logs\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.727270 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-config-data\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.728910 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.737114 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.749637 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpsm5\" (UniqueName: \"kubernetes.io/projected/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-kube-api-access-qpsm5\") pod \"nova-metadata-0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " pod="openstack/nova-metadata-0"
Nov 28 06:41:19 crc kubenswrapper[4955]: I1128 06:41:19.842371 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 28 06:41:20 crc kubenswrapper[4955]: W1128 06:41:20.340189 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a9f8816_6c9f_4c8c_8f34_80fbdaeeb9b0.slice/crio-c7f2a94984d753017e817ec136be218728bbd1baac22eea7eb3e0c500aee84c1 WatchSource:0}: Error finding container c7f2a94984d753017e817ec136be218728bbd1baac22eea7eb3e0c500aee84c1: Status 404 returned error can't find the container with id c7f2a94984d753017e817ec136be218728bbd1baac22eea7eb3e0c500aee84c1
Nov 28 06:41:20 crc kubenswrapper[4955]: I1128 06:41:20.342036 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 28 06:41:20 crc kubenswrapper[4955]: I1128 06:41:20.367990 4955 generic.go:334] "Generic (PLEG): container finished" podID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerID="9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e" exitCode=143
Nov 28 06:41:20 crc kubenswrapper[4955]: I1128 06:41:20.368052 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fbfd74e-5e89-4167-917b-f66827f7d0de","Type":"ContainerDied","Data":"9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e"}
Nov 28 06:41:20 crc kubenswrapper[4955]: I1128 06:41:20.369704 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3d076178-18c0-42af-b40b-3cc8f1cb77cb","Type":"ContainerStarted","Data":"bfe50df9059ed608220fec57c53913147c62050753922f52da3c60ba6d139e74"}
Nov 28 06:41:20 crc kubenswrapper[4955]: I1128 06:41:20.370816 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:20 crc kubenswrapper[4955]: I1128 06:41:20.372143 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0","Type":"ContainerStarted","Data":"c7f2a94984d753017e817ec136be218728bbd1baac22eea7eb3e0c500aee84c1"}
Nov 28 06:41:20 crc kubenswrapper[4955]: I1128 06:41:20.374494 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" containerName="nova-scheduler-scheduler" containerID="cri-o://4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310" gracePeriod=30
Nov 28 06:41:20 crc kubenswrapper[4955]: I1128 06:41:20.395190 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.395176132 podStartE2EDuration="2.395176132s" podCreationTimestamp="2025-11-28 06:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:20.390027702 +0000 UTC m=+1202.979283292" watchObservedRunningTime="2025-11-28 06:41:20.395176132 +0000 UTC m=+1202.984431712"
Nov 28 06:41:21 crc kubenswrapper[4955]: I1128 06:41:21.395979 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0","Type":"ContainerStarted","Data":"9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d"}
Nov 28 06:41:21 crc kubenswrapper[4955]: I1128 06:41:21.396344 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0","Type":"ContainerStarted","Data":"07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041"}
Nov 28 06:41:21 crc kubenswrapper[4955]: I1128 06:41:21.431283 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.431264523 podStartE2EDuration="2.431264523s" podCreationTimestamp="2025-11-28 06:41:19 +0000 UTC" firstStartedPulling="0001-01-01
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:21.416634344 +0000 UTC m=+1204.005889954" watchObservedRunningTime="2025-11-28 06:41:21.431264523 +0000 UTC m=+1204.020520103" Nov 28 06:41:22 crc kubenswrapper[4955]: E1128 06:41:22.492931 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 06:41:22 crc kubenswrapper[4955]: E1128 06:41:22.495701 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 06:41:22 crc kubenswrapper[4955]: E1128 06:41:22.499865 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 06:41:22 crc kubenswrapper[4955]: E1128 06:41:22.499929 4955 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" containerName="nova-scheduler-scheduler" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.239877 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5784cf869f-2cdsc" 
podUID="7c66787d-973a-4cfa-8cde-75249495ec65" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.344015 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.401460 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.423606 4955 generic.go:334] "Generic (PLEG): container finished" podID="eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" containerID="4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310" exitCode=0 Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.423655 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8","Type":"ContainerDied","Data":"4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310"} Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.423684 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8","Type":"ContainerDied","Data":"4c8826973117edfd42f3f5b0135d7aed9bad01ff402fe9284977000a72449e29"} Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.423703 4955 scope.go:117] "RemoveContainer" containerID="4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.423829 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.452626 4955 scope.go:117] "RemoveContainer" containerID="4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310" Nov 28 06:41:23 crc kubenswrapper[4955]: E1128 06:41:23.455147 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310\": container with ID starting with 4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310 not found: ID does not exist" containerID="4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.455185 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310"} err="failed to get container status \"4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310\": rpc error: code = NotFound desc = could not find container \"4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310\": container with ID starting with 4689e733d000fd85fc82089fa21ff6ed2f1d064a5a4c68fc826882afca9bc310 not found: ID does not exist" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.512167 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-combined-ca-bundle\") pod \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.512214 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm47l\" (UniqueName: \"kubernetes.io/projected/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-kube-api-access-pm47l\") pod \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\" (UID: 
\"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.512257 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-config-data\") pod \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\" (UID: \"eeac2ca0-d495-4644-a7d9-8a57dfd01cb8\") " Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.521777 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-kube-api-access-pm47l" (OuterVolumeSpecName: "kube-api-access-pm47l") pod "eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" (UID: "eeac2ca0-d495-4644-a7d9-8a57dfd01cb8"). InnerVolumeSpecName "kube-api-access-pm47l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.537112 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-config-data" (OuterVolumeSpecName: "config-data") pod "eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" (UID: "eeac2ca0-d495-4644-a7d9-8a57dfd01cb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.550265 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" (UID: "eeac2ca0-d495-4644-a7d9-8a57dfd01cb8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.614930 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.614964 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm47l\" (UniqueName: \"kubernetes.io/projected/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-kube-api-access-pm47l\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.614975 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.759200 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.770906 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.785009 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:41:23 crc kubenswrapper[4955]: E1128 06:41:23.785483 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" containerName="nova-scheduler-scheduler" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.785522 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" containerName="nova-scheduler-scheduler" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.785740 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" containerName="nova-scheduler-scheduler" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 
06:41:23.786519 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.789691 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.798468 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.820459 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.820585 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.820655 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp6zv\" (UniqueName: \"kubernetes.io/projected/dc4d1535-01f8-4a19-8381-fe1265f92331-kube-api-access-hp6zv\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.924625 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc 
kubenswrapper[4955]: I1128 06:41:23.924698 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.924745 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp6zv\" (UniqueName: \"kubernetes.io/projected/dc4d1535-01f8-4a19-8381-fe1265f92331-kube-api-access-hp6zv\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.929225 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.938052 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:23 crc kubenswrapper[4955]: I1128 06:41:23.956124 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp6zv\" (UniqueName: \"kubernetes.io/projected/dc4d1535-01f8-4a19-8381-fe1265f92331-kube-api-access-hp6zv\") pod \"nova-scheduler-0\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " pod="openstack/nova-scheduler-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.149785 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.292538 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.334222 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd74e-5e89-4167-917b-f66827f7d0de-logs\") pod \"2fbfd74e-5e89-4167-917b-f66827f7d0de\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.334321 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2ft9\" (UniqueName: \"kubernetes.io/projected/2fbfd74e-5e89-4167-917b-f66827f7d0de-kube-api-access-q2ft9\") pod \"2fbfd74e-5e89-4167-917b-f66827f7d0de\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.334539 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-combined-ca-bundle\") pod \"2fbfd74e-5e89-4167-917b-f66827f7d0de\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.334611 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-config-data\") pod \"2fbfd74e-5e89-4167-917b-f66827f7d0de\" (UID: \"2fbfd74e-5e89-4167-917b-f66827f7d0de\") " Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.341638 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fbfd74e-5e89-4167-917b-f66827f7d0de-logs" (OuterVolumeSpecName: "logs") pod "2fbfd74e-5e89-4167-917b-f66827f7d0de" (UID: "2fbfd74e-5e89-4167-917b-f66827f7d0de"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.351837 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fbfd74e-5e89-4167-917b-f66827f7d0de-kube-api-access-q2ft9" (OuterVolumeSpecName: "kube-api-access-q2ft9") pod "2fbfd74e-5e89-4167-917b-f66827f7d0de" (UID: "2fbfd74e-5e89-4167-917b-f66827f7d0de"). InnerVolumeSpecName "kube-api-access-q2ft9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.381678 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fbfd74e-5e89-4167-917b-f66827f7d0de" (UID: "2fbfd74e-5e89-4167-917b-f66827f7d0de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.438825 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-config-data" (OuterVolumeSpecName: "config-data") pod "2fbfd74e-5e89-4167-917b-f66827f7d0de" (UID: "2fbfd74e-5e89-4167-917b-f66827f7d0de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.463055 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2ft9\" (UniqueName: \"kubernetes.io/projected/2fbfd74e-5e89-4167-917b-f66827f7d0de-kube-api-access-q2ft9\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.463096 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.463111 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fbfd74e-5e89-4167-917b-f66827f7d0de-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.463123 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fbfd74e-5e89-4167-917b-f66827f7d0de-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.475163 4955 generic.go:334] "Generic (PLEG): container finished" podID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerID="0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44" exitCode=0 Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.475400 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fbfd74e-5e89-4167-917b-f66827f7d0de","Type":"ContainerDied","Data":"0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44"} Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.475435 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2fbfd74e-5e89-4167-917b-f66827f7d0de","Type":"ContainerDied","Data":"d0e404944497bcc86f302962b726b44ec8ed82ca76a9abdcde73c16e30c67803"} Nov 28 06:41:24 crc kubenswrapper[4955]: 
I1128 06:41:24.475453 4955 scope.go:117] "RemoveContainer" containerID="0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.475389 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.516426 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.528757 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.528929 4955 scope.go:117] "RemoveContainer" containerID="9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.538881 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:24 crc kubenswrapper[4955]: E1128 06:41:24.539529 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-api" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.539601 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-api" Nov 28 06:41:24 crc kubenswrapper[4955]: E1128 06:41:24.539685 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-log" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.539741 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-log" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.539978 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-api" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.540060 4955 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" containerName="nova-api-log" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.541168 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.544195 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.553553 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.572431 4955 scope.go:117] "RemoveContainer" containerID="0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44" Nov 28 06:41:24 crc kubenswrapper[4955]: E1128 06:41:24.573308 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44\": container with ID starting with 0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44 not found: ID does not exist" containerID="0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.573393 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44"} err="failed to get container status \"0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44\": rpc error: code = NotFound desc = could not find container \"0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44\": container with ID starting with 0801e75a1010fdff2cc3f8db1e9f1285409b373da144dccfec96c4cec6ed0c44 not found: ID does not exist" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.573436 4955 scope.go:117] "RemoveContainer" 
containerID="9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e" Nov 28 06:41:24 crc kubenswrapper[4955]: E1128 06:41:24.573939 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e\": container with ID starting with 9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e not found: ID does not exist" containerID="9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.573977 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e"} err="failed to get container status \"9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e\": rpc error: code = NotFound desc = could not find container \"9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e\": container with ID starting with 9bc77810bd27778b4540f34aa5fc42c6320af1d22504156a274d7427284e5c7e not found: ID does not exist" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.640882 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:41:24 crc kubenswrapper[4955]: W1128 06:41:24.644363 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc4d1535_01f8_4a19_8381_fe1265f92331.slice/crio-cf0b53250fa1a369fad9b060b13cb292a7bdeb6b3e6bec4e3cd8da9ba3806bcb WatchSource:0}: Error finding container cf0b53250fa1a369fad9b060b13cb292a7bdeb6b3e6bec4e3cd8da9ba3806bcb: Status 404 returned error can't find the container with id cf0b53250fa1a369fad9b060b13cb292a7bdeb6b3e6bec4e3cd8da9ba3806bcb Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.668666 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-config-data\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.668712 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.668770 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c36dde0-1b33-4d9d-b043-59c33196b7be-logs\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.668822 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mzt7\" (UniqueName: \"kubernetes.io/projected/2c36dde0-1b33-4d9d-b043-59c33196b7be-kube-api-access-9mzt7\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.770853 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0" Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.770950 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c36dde0-1b33-4d9d-b043-59c33196b7be-logs\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " 
pod="openstack/nova-api-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.770990 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mzt7\" (UniqueName: \"kubernetes.io/projected/2c36dde0-1b33-4d9d-b043-59c33196b7be-kube-api-access-9mzt7\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.771094 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-config-data\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.772357 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c36dde0-1b33-4d9d-b043-59c33196b7be-logs\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.774104 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-config-data\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.777087 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.790311 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mzt7\" (UniqueName: \"kubernetes.io/projected/2c36dde0-1b33-4d9d-b043-59c33196b7be-kube-api-access-9mzt7\") pod \"nova-api-0\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " pod="openstack/nova-api-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.842966 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.843008 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 28 06:41:24 crc kubenswrapper[4955]: I1128 06:41:24.874384 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 28 06:41:25 crc kubenswrapper[4955]: I1128 06:41:25.373755 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 28 06:41:25 crc kubenswrapper[4955]: W1128 06:41:25.376703 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c36dde0_1b33_4d9d_b043_59c33196b7be.slice/crio-0b178504039f8819a7858a781b725583a77cd17e56d2a5df35e1410f320e86cb WatchSource:0}: Error finding container 0b178504039f8819a7858a781b725583a77cd17e56d2a5df35e1410f320e86cb: Status 404 returned error can't find the container with id 0b178504039f8819a7858a781b725583a77cd17e56d2a5df35e1410f320e86cb
Nov 28 06:41:25 crc kubenswrapper[4955]: I1128 06:41:25.485955 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c36dde0-1b33-4d9d-b043-59c33196b7be","Type":"ContainerStarted","Data":"0b178504039f8819a7858a781b725583a77cd17e56d2a5df35e1410f320e86cb"}
Nov 28 06:41:25 crc kubenswrapper[4955]: I1128 06:41:25.491708 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc4d1535-01f8-4a19-8381-fe1265f92331","Type":"ContainerStarted","Data":"32bd8b326acdf3708c77257a2097a6306a8d7dcbc5567190b421e1364bfcf47a"}
Nov 28 06:41:25 crc kubenswrapper[4955]: I1128 06:41:25.491788 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc4d1535-01f8-4a19-8381-fe1265f92331","Type":"ContainerStarted","Data":"cf0b53250fa1a369fad9b060b13cb292a7bdeb6b3e6bec4e3cd8da9ba3806bcb"}
Nov 28 06:41:25 crc kubenswrapper[4955]: I1128 06:41:25.526689 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.526663415 podStartE2EDuration="2.526663415s" podCreationTimestamp="2025-11-28 06:41:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:25.509698243 +0000 UTC m=+1208.098953833" watchObservedRunningTime="2025-11-28 06:41:25.526663415 +0000 UTC m=+1208.115919015"
Nov 28 06:41:25 crc kubenswrapper[4955]: I1128 06:41:25.717628 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fbfd74e-5e89-4167-917b-f66827f7d0de" path="/var/lib/kubelet/pods/2fbfd74e-5e89-4167-917b-f66827f7d0de/volumes"
Nov 28 06:41:25 crc kubenswrapper[4955]: I1128 06:41:25.718226 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeac2ca0-d495-4644-a7d9-8a57dfd01cb8" path="/var/lib/kubelet/pods/eeac2ca0-d495-4644-a7d9-8a57dfd01cb8/volumes"
Nov 28 06:41:26 crc kubenswrapper[4955]: I1128 06:41:26.506023 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c36dde0-1b33-4d9d-b043-59c33196b7be","Type":"ContainerStarted","Data":"6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec"}
Nov 28 06:41:26 crc kubenswrapper[4955]: I1128 06:41:26.506084 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c36dde0-1b33-4d9d-b043-59c33196b7be","Type":"ContainerStarted","Data":"be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3"}
Nov 28 06:41:26 crc kubenswrapper[4955]: I1128 06:41:26.543247 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.543226611 podStartE2EDuration="2.543226611s" podCreationTimestamp="2025-11-28 06:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:26.533559053 +0000 UTC m=+1209.122814623" watchObservedRunningTime="2025-11-28 06:41:26.543226611 +0000 UTC m=+1209.132482191"
Nov 28 06:41:27 crc kubenswrapper[4955]: I1128 06:41:27.480203 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 06:41:27 crc kubenswrapper[4955]: I1128 06:41:27.480644 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="49e9a1c0-0f8a-43ad-8180-6ecb191c5850" containerName="kube-state-metrics" containerID="cri-o://507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3" gracePeriod=30
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.073806 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.158224 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vhck\" (UniqueName: \"kubernetes.io/projected/49e9a1c0-0f8a-43ad-8180-6ecb191c5850-kube-api-access-2vhck\") pod \"49e9a1c0-0f8a-43ad-8180-6ecb191c5850\" (UID: \"49e9a1c0-0f8a-43ad-8180-6ecb191c5850\") "
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.165840 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e9a1c0-0f8a-43ad-8180-6ecb191c5850-kube-api-access-2vhck" (OuterVolumeSpecName: "kube-api-access-2vhck") pod "49e9a1c0-0f8a-43ad-8180-6ecb191c5850" (UID: "49e9a1c0-0f8a-43ad-8180-6ecb191c5850"). InnerVolumeSpecName "kube-api-access-2vhck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.267286 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vhck\" (UniqueName: \"kubernetes.io/projected/49e9a1c0-0f8a-43ad-8180-6ecb191c5850-kube-api-access-2vhck\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.525164 4955 generic.go:334] "Generic (PLEG): container finished" podID="49e9a1c0-0f8a-43ad-8180-6ecb191c5850" containerID="507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3" exitCode=2
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.525209 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"49e9a1c0-0f8a-43ad-8180-6ecb191c5850","Type":"ContainerDied","Data":"507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3"}
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.525236 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"49e9a1c0-0f8a-43ad-8180-6ecb191c5850","Type":"ContainerDied","Data":"12109ad1b919de4391a41a30680ae8ca031b80ca80c54d2638b33a94efd9ed31"}
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.525245 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.525253 4955 scope.go:117] "RemoveContainer" containerID="507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.555787 4955 scope.go:117] "RemoveContainer" containerID="507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3"
Nov 28 06:41:28 crc kubenswrapper[4955]: E1128 06:41:28.556411 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3\": container with ID starting with 507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3 not found: ID does not exist" containerID="507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.556466 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3"} err="failed to get container status \"507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3\": rpc error: code = NotFound desc = could not find container \"507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3\": container with ID starting with 507da0e16a7ca4430afa0169a3b526525ae4fb877a4d731db42296eca66789d3 not found: ID does not exist"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.565542 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.579265 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.595529 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 06:41:28 crc kubenswrapper[4955]: E1128 06:41:28.595969 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e9a1c0-0f8a-43ad-8180-6ecb191c5850" containerName="kube-state-metrics"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.605880 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e9a1c0-0f8a-43ad-8180-6ecb191c5850" containerName="kube-state-metrics"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.606244 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e9a1c0-0f8a-43ad-8180-6ecb191c5850" containerName="kube-state-metrics"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.606830 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.606915 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.609082 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.609458 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.675527 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.675590 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5jfs\" (UniqueName: \"kubernetes.io/projected/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-api-access-t5jfs\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.675678 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.675713 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.776908 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.776960 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5jfs\" (UniqueName: \"kubernetes.io/projected/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-api-access-t5jfs\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.777003 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.777039 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.790392 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.790432 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.790986 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.800269 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5jfs\" (UniqueName: \"kubernetes.io/projected/ab74c890-3754-4fdb-84ab-0884ae7ca237-kube-api-access-t5jfs\") pod \"kube-state-metrics-0\" (UID: \"ab74c890-3754-4fdb-84ab-0884ae7ca237\") " pod="openstack/kube-state-metrics-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.824398 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Nov 28 06:41:28 crc kubenswrapper[4955]: I1128 06:41:28.937556 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.150324 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.395633 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.400235 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="proxy-httpd" containerID="cri-o://db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5" gracePeriod=30
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.400818 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="sg-core" containerID="cri-o://d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2" gracePeriod=30
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.400876 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="ceilometer-notification-agent" containerID="cri-o://73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9" gracePeriod=30
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.400985 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="ceilometer-central-agent" containerID="cri-o://19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b" gracePeriod=30
Nov 28 06:41:29 crc kubenswrapper[4955]: W1128 06:41:29.450542 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab74c890_3754_4fdb_84ab_0884ae7ca237.slice/crio-c303bed59fd7dfca19ac2089545a14de5366348e2fcd2bfba67bd8a66a69f5ce WatchSource:0}: Error finding container c303bed59fd7dfca19ac2089545a14de5366348e2fcd2bfba67bd8a66a69f5ce: Status 404 returned error can't find the container with id c303bed59fd7dfca19ac2089545a14de5366348e2fcd2bfba67bd8a66a69f5ce
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.467463 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.540713 4955 generic.go:334] "Generic (PLEG): container finished" podID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerID="db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5" exitCode=0
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.540786 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerDied","Data":"db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5"}
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.542234 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab74c890-3754-4fdb-84ab-0884ae7ca237","Type":"ContainerStarted","Data":"c303bed59fd7dfca19ac2089545a14de5366348e2fcd2bfba67bd8a66a69f5ce"}
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.719900 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e9a1c0-0f8a-43ad-8180-6ecb191c5850" path="/var/lib/kubelet/pods/49e9a1c0-0f8a-43ad-8180-6ecb191c5850/volumes"
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.842692 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 28 06:41:29 crc kubenswrapper[4955]: I1128 06:41:29.842747 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.551535 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ab74c890-3754-4fdb-84ab-0884ae7ca237","Type":"ContainerStarted","Data":"62180d2e1649da6a40592f33c4f44a32d3a3959d4926bf296be61ade7d1f1b52"}
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.551672 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.554357 4955 generic.go:334] "Generic (PLEG): container finished" podID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerID="d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2" exitCode=2
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.554389 4955 generic.go:334] "Generic (PLEG): container finished" podID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerID="19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b" exitCode=0
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.554430 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerDied","Data":"d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2"}
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.554454 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerDied","Data":"19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b"}
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.578302 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.193789603 podStartE2EDuration="2.578285658s" podCreationTimestamp="2025-11-28 06:41:28 +0000 UTC" firstStartedPulling="2025-11-28 06:41:29.4547005 +0000 UTC m=+1212.043956070" lastFinishedPulling="2025-11-28 06:41:29.839196545 +0000 UTC m=+1212.428452125" observedRunningTime="2025-11-28 06:41:30.568402784 +0000 UTC m=+1213.157658354" watchObservedRunningTime="2025-11-28 06:41:30.578285658 +0000 UTC m=+1213.167541218"
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.857713 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 28 06:41:30 crc kubenswrapper[4955]: I1128 06:41:30.857730 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.370478 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.431527 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-combined-ca-bundle\") pod \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") "
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.431573 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-log-httpd\") pod \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") "
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.431612 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-sg-core-conf-yaml\") pod \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") "
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.431641 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-config-data\") pod \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") "
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.431689 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-run-httpd\") pod \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") "
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.431777 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-scripts\") pod \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") "
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.431803 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnqbz\" (UniqueName: \"kubernetes.io/projected/4d266d08-5094-4cfe-8adb-720a7dafcfdd-kube-api-access-vnqbz\") pod \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\" (UID: \"4d266d08-5094-4cfe-8adb-720a7dafcfdd\") "
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.433126 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4d266d08-5094-4cfe-8adb-720a7dafcfdd" (UID: "4d266d08-5094-4cfe-8adb-720a7dafcfdd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.440425 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4d266d08-5094-4cfe-8adb-720a7dafcfdd" (UID: "4d266d08-5094-4cfe-8adb-720a7dafcfdd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.452140 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-scripts" (OuterVolumeSpecName: "scripts") pod "4d266d08-5094-4cfe-8adb-720a7dafcfdd" (UID: "4d266d08-5094-4cfe-8adb-720a7dafcfdd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.453068 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d266d08-5094-4cfe-8adb-720a7dafcfdd-kube-api-access-vnqbz" (OuterVolumeSpecName: "kube-api-access-vnqbz") pod "4d266d08-5094-4cfe-8adb-720a7dafcfdd" (UID: "4d266d08-5094-4cfe-8adb-720a7dafcfdd"). InnerVolumeSpecName "kube-api-access-vnqbz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.488015 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4d266d08-5094-4cfe-8adb-720a7dafcfdd" (UID: "4d266d08-5094-4cfe-8adb-720a7dafcfdd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.538655 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-scripts\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.538688 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnqbz\" (UniqueName: \"kubernetes.io/projected/4d266d08-5094-4cfe-8adb-720a7dafcfdd-kube-api-access-vnqbz\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.538698 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.538708 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.538716 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4d266d08-5094-4cfe-8adb-720a7dafcfdd-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.541689 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d266d08-5094-4cfe-8adb-720a7dafcfdd" (UID: "4d266d08-5094-4cfe-8adb-720a7dafcfdd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.557496 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-config-data" (OuterVolumeSpecName: "config-data") pod "4d266d08-5094-4cfe-8adb-720a7dafcfdd" (UID: "4d266d08-5094-4cfe-8adb-720a7dafcfdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.568328 4955 generic.go:334] "Generic (PLEG): container finished" podID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerID="73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9" exitCode=0
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.568999 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.569361 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerDied","Data":"73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9"}
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.569402 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4d266d08-5094-4cfe-8adb-720a7dafcfdd","Type":"ContainerDied","Data":"b81e2e7359bc9113fe04f1f64f418a6710cc6c7baf15fceddcced95795e91ae6"}
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.569423 4955 scope.go:117] "RemoveContainer" containerID="db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.596400 4955 scope.go:117] "RemoveContainer" containerID="d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.600741 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.609023 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.629579 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 06:41:31 crc kubenswrapper[4955]: E1128 06:41:31.630045 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="ceilometer-notification-agent"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.630070 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="ceilometer-notification-agent"
Nov 28 06:41:31 crc kubenswrapper[4955]: E1128 06:41:31.630094 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="ceilometer-central-agent"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.630103 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="ceilometer-central-agent"
Nov 28 06:41:31 crc kubenswrapper[4955]: E1128 06:41:31.630114 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="proxy-httpd"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.630122 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="proxy-httpd"
Nov 28 06:41:31 crc kubenswrapper[4955]: E1128 06:41:31.630140 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="sg-core"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.630147 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="sg-core"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.630342 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="ceilometer-notification-agent"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.630367 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="proxy-httpd"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.630382 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="ceilometer-central-agent"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.630405 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" containerName="sg-core"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.632894 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.636094 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.636274 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.636297 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.637595 4955 scope.go:117] "RemoveContainer" containerID="73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.640151 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.640172 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d266d08-5094-4cfe-8adb-720a7dafcfdd-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.646886 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.661645 4955 scope.go:117] "RemoveContainer" containerID="19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.689613 4955 scope.go:117] "RemoveContainer" containerID="db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5"
Nov 28 06:41:31 crc kubenswrapper[4955]: E1128 06:41:31.690064 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5\": container with ID starting with db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5 not found: ID does not exist" containerID="db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.690095 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5"} err="failed to get container status \"db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5\": rpc error: code = NotFound desc = could not find container \"db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5\": container with ID starting with db002e5102cd4d3da7fce65dba37315f33f39bcfbba883f50e16ccc7a853e4b5 not found: ID does not exist"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.690116 4955 scope.go:117] "RemoveContainer" containerID="d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2"
Nov 28 06:41:31 crc kubenswrapper[4955]: E1128 06:41:31.690709 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2\": container with ID starting with d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2 not found: ID does not exist" containerID="d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.690732 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2"} err="failed to get container status \"d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2\": rpc error: code = NotFound desc = could not find container \"d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2\": container with ID starting with d08a0811186690ac98930a64194fa022f1967d7b93c16d55be03809f91d7f5d2 not found: ID does not exist"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.691020 4955 scope.go:117] "RemoveContainer" containerID="73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9"
Nov 28 06:41:31 crc kubenswrapper[4955]: E1128 06:41:31.691426 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9\": container with ID starting with 73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9 not found: ID does not exist" containerID="73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.691465 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9"} err="failed to get container status \"73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9\": rpc error: code = NotFound desc = could not find container \"73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9\": container with ID starting with 73f687b98b3a707c8b4bacdd68763821d12326d144bf6f73596c0871643b3db9 not found: ID does not exist"
Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.691491 4955 scope.go:117] "RemoveContainer" containerID="19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b"
Nov 28 06:41:31 crc kubenswrapper[4955]: E1128 06:41:31.694619 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b\": container with ID starting with 19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b not found: ID does not exist" containerID="19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b"
Nov 28
06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.694653 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b"} err="failed to get container status \"19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b\": rpc error: code = NotFound desc = could not find container \"19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b\": container with ID starting with 19178fa73d011e5ca9afc677d84e48af05159a223e1579819f1b5fc39729609b not found: ID does not exist" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.715451 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d266d08-5094-4cfe-8adb-720a7dafcfdd" path="/var/lib/kubelet/pods/4d266d08-5094-4cfe-8adb-720a7dafcfdd/volumes" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.742086 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.742141 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-run-httpd\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.742215 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc 
kubenswrapper[4955]: I1128 06:41:31.742272 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtzvj\" (UniqueName: \"kubernetes.io/projected/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-kube-api-access-dtzvj\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.742290 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-scripts\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.742329 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-config-data\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.742351 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.742425 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-log-httpd\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.844251 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.844594 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtzvj\" (UniqueName: \"kubernetes.io/projected/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-kube-api-access-dtzvj\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.844623 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-scripts\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.844645 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-config-data\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.845316 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.845377 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-log-httpd\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 
crc kubenswrapper[4955]: I1128 06:41:31.845730 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-log-httpd\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.845869 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.845899 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-run-httpd\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.846747 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-run-httpd\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.849338 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.849668 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.850790 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-scripts\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.851736 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-config-data\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.852151 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.863522 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtzvj\" (UniqueName: \"kubernetes.io/projected/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-kube-api-access-dtzvj\") pod \"ceilometer-0\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " pod="openstack/ceilometer-0" Nov 28 06:41:31 crc kubenswrapper[4955]: I1128 06:41:31.962430 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:41:32 crc kubenswrapper[4955]: I1128 06:41:32.457173 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:41:32 crc kubenswrapper[4955]: I1128 06:41:32.581873 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerStarted","Data":"9e92ac87aba04506021358146699afbae0993187194902bd98190b1da1d87adb"} Nov 28 06:41:33 crc kubenswrapper[4955]: I1128 06:41:33.593774 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerStarted","Data":"109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b"} Nov 28 06:41:34 crc kubenswrapper[4955]: I1128 06:41:34.150888 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 06:41:34 crc kubenswrapper[4955]: I1128 06:41:34.175149 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 06:41:34 crc kubenswrapper[4955]: I1128 06:41:34.607402 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerStarted","Data":"ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01"} Nov 28 06:41:34 crc kubenswrapper[4955]: I1128 06:41:34.646977 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 06:41:34 crc kubenswrapper[4955]: I1128 06:41:34.874662 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 06:41:34 crc kubenswrapper[4955]: I1128 06:41:34.875365 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 06:41:35 crc kubenswrapper[4955]: I1128 
06:41:35.617142 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerStarted","Data":"2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e"} Nov 28 06:41:35 crc kubenswrapper[4955]: I1128 06:41:35.957750 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 06:41:35 crc kubenswrapper[4955]: I1128 06:41:35.957843 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.195:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 06:41:36 crc kubenswrapper[4955]: I1128 06:41:36.629027 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerStarted","Data":"40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3"} Nov 28 06:41:36 crc kubenswrapper[4955]: I1128 06:41:36.629457 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 06:41:36 crc kubenswrapper[4955]: I1128 06:41:36.657252 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.157909673 podStartE2EDuration="5.657229593s" podCreationTimestamp="2025-11-28 06:41:31 +0000 UTC" firstStartedPulling="2025-11-28 06:41:32.462466863 +0000 UTC m=+1215.051722433" lastFinishedPulling="2025-11-28 06:41:35.961786783 +0000 UTC m=+1218.551042353" observedRunningTime="2025-11-28 06:41:36.648042679 +0000 UTC m=+1219.237298269" watchObservedRunningTime="2025-11-28 
06:41:36.657229593 +0000 UTC m=+1219.246485173" Nov 28 06:41:38 crc kubenswrapper[4955]: I1128 06:41:38.949006 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 06:41:39 crc kubenswrapper[4955]: I1128 06:41:39.849865 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 06:41:39 crc kubenswrapper[4955]: I1128 06:41:39.853889 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 06:41:39 crc kubenswrapper[4955]: I1128 06:41:39.855484 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 06:41:40 crc kubenswrapper[4955]: I1128 06:41:40.684735 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.624570 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.651441 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-combined-ca-bundle\") pod \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.651571 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc2s7\" (UniqueName: \"kubernetes.io/projected/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-kube-api-access-hc2s7\") pod \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.651770 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-config-data\") pod \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\" (UID: \"90978c6f-38fd-4b9e-83e9-18a3082fe2fa\") " Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.657303 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-kube-api-access-hc2s7" (OuterVolumeSpecName: "kube-api-access-hc2s7") pod "90978c6f-38fd-4b9e-83e9-18a3082fe2fa" (UID: "90978c6f-38fd-4b9e-83e9-18a3082fe2fa"). InnerVolumeSpecName "kube-api-access-hc2s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.680491 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-config-data" (OuterVolumeSpecName: "config-data") pod "90978c6f-38fd-4b9e-83e9-18a3082fe2fa" (UID: "90978c6f-38fd-4b9e-83e9-18a3082fe2fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.684748 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90978c6f-38fd-4b9e-83e9-18a3082fe2fa" (UID: "90978c6f-38fd-4b9e-83e9-18a3082fe2fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.708086 4955 generic.go:334] "Generic (PLEG): container finished" podID="90978c6f-38fd-4b9e-83e9-18a3082fe2fa" containerID="c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1" exitCode=137 Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.708163 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.708168 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"90978c6f-38fd-4b9e-83e9-18a3082fe2fa","Type":"ContainerDied","Data":"c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1"} Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.708216 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"90978c6f-38fd-4b9e-83e9-18a3082fe2fa","Type":"ContainerDied","Data":"396d4410040ac5045c38732fb346427852b9cb3e11bc7f2444eb5b580102785c"} Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.708236 4955 scope.go:117] "RemoveContainer" containerID="c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.754616 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc2s7\" (UniqueName: \"kubernetes.io/projected/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-kube-api-access-hc2s7\") on node 
\"crc\" DevicePath \"\"" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.755289 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.755337 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90978c6f-38fd-4b9e-83e9-18a3082fe2fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.784968 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.789274 4955 scope.go:117] "RemoveContainer" containerID="c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1" Nov 28 06:41:42 crc kubenswrapper[4955]: E1128 06:41:42.789762 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1\": container with ID starting with c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1 not found: ID does not exist" containerID="c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.789837 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1"} err="failed to get container status \"c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1\": rpc error: code = NotFound desc = could not find container \"c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1\": container with ID starting with c55302ea448640378d1f5565ecd70cf73227bb9481c8f9f9f4fd6d54715c4fc1 not found: ID does not exist" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 
06:41:42.798692 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.806094 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:42 crc kubenswrapper[4955]: E1128 06:41:42.806444 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90978c6f-38fd-4b9e-83e9-18a3082fe2fa" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.806457 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="90978c6f-38fd-4b9e-83e9-18a3082fe2fa" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.806654 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="90978c6f-38fd-4b9e-83e9-18a3082fe2fa" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.807214 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.814864 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.820774 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.821123 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.825282 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.856231 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.856293 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.856368 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc 
kubenswrapper[4955]: I1128 06:41:42.856416 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqz5p\" (UniqueName: \"kubernetes.io/projected/fe4d7165-1010-42a5-a707-257169437be1-kube-api-access-fqz5p\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.856465 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.957869 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.957957 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.958024 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 
06:41:42.958069 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqz5p\" (UniqueName: \"kubernetes.io/projected/fe4d7165-1010-42a5-a707-257169437be1-kube-api-access-fqz5p\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.958112 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.961489 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.961617 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.962979 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.964366 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe4d7165-1010-42a5-a707-257169437be1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:42 crc kubenswrapper[4955]: I1128 06:41:42.978072 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqz5p\" (UniqueName: \"kubernetes.io/projected/fe4d7165-1010-42a5-a707-257169437be1-kube-api-access-fqz5p\") pod \"nova-cell1-novncproxy-0\" (UID: \"fe4d7165-1010-42a5-a707-257169437be1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:43 crc kubenswrapper[4955]: I1128 06:41:43.141010 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:43 crc kubenswrapper[4955]: I1128 06:41:43.614619 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 06:41:43 crc kubenswrapper[4955]: I1128 06:41:43.714955 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90978c6f-38fd-4b9e-83e9-18a3082fe2fa" path="/var/lib/kubelet/pods/90978c6f-38fd-4b9e-83e9-18a3082fe2fa/volumes" Nov 28 06:41:43 crc kubenswrapper[4955]: I1128 06:41:43.719333 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fe4d7165-1010-42a5-a707-257169437be1","Type":"ContainerStarted","Data":"47b433e059c65f81a17efa6a610dc6676339ee8d4e6499d4587558e593780869"} Nov 28 06:41:44 crc kubenswrapper[4955]: I1128 06:41:44.736865 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fe4d7165-1010-42a5-a707-257169437be1","Type":"ContainerStarted","Data":"7d59b870d5010feb68d1d4d5d016d96468d150db58f7abca330bee8d3604b052"} Nov 28 06:41:44 crc kubenswrapper[4955]: I1128 06:41:44.768546 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.768527598 podStartE2EDuration="2.768527598s" podCreationTimestamp="2025-11-28 06:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:44.759921771 +0000 UTC m=+1227.349177401" watchObservedRunningTime="2025-11-28 06:41:44.768527598 +0000 UTC m=+1227.357783168" Nov 28 06:41:44 crc kubenswrapper[4955]: I1128 06:41:44.879145 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 06:41:44 crc kubenswrapper[4955]: I1128 06:41:44.880608 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 06:41:44 crc kubenswrapper[4955]: I1128 06:41:44.880904 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 06:41:44 crc kubenswrapper[4955]: I1128 06:41:44.883326 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 06:41:45 crc kubenswrapper[4955]: I1128 06:41:45.744718 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 06:41:45 crc kubenswrapper[4955]: I1128 06:41:45.749919 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 06:41:45 crc kubenswrapper[4955]: I1128 06:41:45.955965 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qwl6p"] Nov 28 06:41:45 crc kubenswrapper[4955]: I1128 06:41:45.957567 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:45 crc kubenswrapper[4955]: I1128 06:41:45.974989 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qwl6p"] Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.124622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzmx\" (UniqueName: \"kubernetes.io/projected/df00d334-73f9-4bec-9dd6-99e06f16e4bc-kube-api-access-dhzmx\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.124772 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.124861 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.124906 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.124928 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.125012 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-config\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.226932 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.226966 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.227025 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-config\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.227114 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dhzmx\" (UniqueName: \"kubernetes.io/projected/df00d334-73f9-4bec-9dd6-99e06f16e4bc-kube-api-access-dhzmx\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.227174 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.227216 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.227981 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.228069 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.228091 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.228173 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.228213 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-config\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.255394 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhzmx\" (UniqueName: \"kubernetes.io/projected/df00d334-73f9-4bec-9dd6-99e06f16e4bc-kube-api-access-dhzmx\") pod \"dnsmasq-dns-59cf4bdb65-qwl6p\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.297477 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:46 crc kubenswrapper[4955]: I1128 06:41:46.764277 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qwl6p"] Nov 28 06:41:46 crc kubenswrapper[4955]: W1128 06:41:46.766496 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf00d334_73f9_4bec_9dd6_99e06f16e4bc.slice/crio-67a20940f6cb8e5527595ff3837a0605759bd6c784c8994961b11288ac90b0c0 WatchSource:0}: Error finding container 67a20940f6cb8e5527595ff3837a0605759bd6c784c8994961b11288ac90b0c0: Status 404 returned error can't find the container with id 67a20940f6cb8e5527595ff3837a0605759bd6c784c8994961b11288ac90b0c0 Nov 28 06:41:47 crc kubenswrapper[4955]: I1128 06:41:47.774801 4955 generic.go:334] "Generic (PLEG): container finished" podID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" containerID="71fe32ff4ee2ec6be007e8cb7a45902df6f797346aed76543555a61eac6bf738" exitCode=0 Nov 28 06:41:47 crc kubenswrapper[4955]: I1128 06:41:47.774925 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" event={"ID":"df00d334-73f9-4bec-9dd6-99e06f16e4bc","Type":"ContainerDied","Data":"71fe32ff4ee2ec6be007e8cb7a45902df6f797346aed76543555a61eac6bf738"} Nov 28 06:41:47 crc kubenswrapper[4955]: I1128 06:41:47.775490 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" event={"ID":"df00d334-73f9-4bec-9dd6-99e06f16e4bc","Type":"ContainerStarted","Data":"67a20940f6cb8e5527595ff3837a0605759bd6c784c8994961b11288ac90b0c0"} Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.122779 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.123432 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="ceilometer-central-agent" containerID="cri-o://109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b" gracePeriod=30 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.123485 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="proxy-httpd" containerID="cri-o://40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3" gracePeriod=30 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.123557 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="ceilometer-notification-agent" containerID="cri-o://ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01" gracePeriod=30 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.123562 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="sg-core" containerID="cri-o://2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e" gracePeriod=30 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.130938 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.197:3000/\": EOF" Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.142183 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.506297 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.786985 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" event={"ID":"df00d334-73f9-4bec-9dd6-99e06f16e4bc","Type":"ContainerStarted","Data":"af2c3a34ac482df3be679ac7702db3fdf0e67ea2d75e5ebbcf39fcb6bcdd8125"} Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.787139 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.789909 4955 generic.go:334] "Generic (PLEG): container finished" podID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerID="40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3" exitCode=0 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.789941 4955 generic.go:334] "Generic (PLEG): container finished" podID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerID="2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e" exitCode=2 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.789950 4955 generic.go:334] "Generic (PLEG): container finished" podID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerID="109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b" exitCode=0 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.790003 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerDied","Data":"40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3"} Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.790039 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerDied","Data":"2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e"} Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.790054 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerDied","Data":"109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b"} Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.790097 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-log" containerID="cri-o://be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3" gracePeriod=30 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.790128 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-api" containerID="cri-o://6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec" gracePeriod=30 Nov 28 06:41:48 crc kubenswrapper[4955]: I1128 06:41:48.818943 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" podStartSLOduration=3.818902104 podStartE2EDuration="3.818902104s" podCreationTimestamp="2025-11-28 06:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:48.809252507 +0000 UTC m=+1231.398508097" watchObservedRunningTime="2025-11-28 06:41:48.818902104 +0000 UTC m=+1231.408157674" Nov 28 06:41:48 crc kubenswrapper[4955]: E1128 06:41:48.877402 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c36dde0_1b33_4d9d_b043_59c33196b7be.slice/crio-be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3.scope\": RecentStats: unable to find data in memory cache]" Nov 28 06:41:49 crc kubenswrapper[4955]: I1128 06:41:49.805786 4955 generic.go:334] "Generic (PLEG): container finished" podID="2c36dde0-1b33-4d9d-b043-59c33196b7be" 
containerID="be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3" exitCode=143 Nov 28 06:41:49 crc kubenswrapper[4955]: I1128 06:41:49.805878 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c36dde0-1b33-4d9d-b043-59c33196b7be","Type":"ContainerDied","Data":"be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3"} Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.419291 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.505138 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-combined-ca-bundle\") pod \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.505191 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-scripts\") pod \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.505254 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-ceilometer-tls-certs\") pod \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.505315 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-config-data\") pod \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " Nov 28 06:41:50 crc 
kubenswrapper[4955]: I1128 06:41:50.505400 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtzvj\" (UniqueName: \"kubernetes.io/projected/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-kube-api-access-dtzvj\") pod \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.505461 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-sg-core-conf-yaml\") pod \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.505485 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-run-httpd\") pod \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.505532 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-log-httpd\") pod \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\" (UID: \"b8eee21f-c7f7-4200-89b3-50b7f57f3d79\") " Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.506669 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b8eee21f-c7f7-4200-89b3-50b7f57f3d79" (UID: "b8eee21f-c7f7-4200-89b3-50b7f57f3d79"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.525995 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b8eee21f-c7f7-4200-89b3-50b7f57f3d79" (UID: "b8eee21f-c7f7-4200-89b3-50b7f57f3d79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.577672 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-scripts" (OuterVolumeSpecName: "scripts") pod "b8eee21f-c7f7-4200-89b3-50b7f57f3d79" (UID: "b8eee21f-c7f7-4200-89b3-50b7f57f3d79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.583709 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-kube-api-access-dtzvj" (OuterVolumeSpecName: "kube-api-access-dtzvj") pod "b8eee21f-c7f7-4200-89b3-50b7f57f3d79" (UID: "b8eee21f-c7f7-4200-89b3-50b7f57f3d79"). InnerVolumeSpecName "kube-api-access-dtzvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.607128 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtzvj\" (UniqueName: \"kubernetes.io/projected/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-kube-api-access-dtzvj\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.607349 4955 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.607429 4955 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.607540 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.633708 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b8eee21f-c7f7-4200-89b3-50b7f57f3d79" (UID: "b8eee21f-c7f7-4200-89b3-50b7f57f3d79"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.681445 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b8eee21f-c7f7-4200-89b3-50b7f57f3d79" (UID: "b8eee21f-c7f7-4200-89b3-50b7f57f3d79"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.688079 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-config-data" (OuterVolumeSpecName: "config-data") pod "b8eee21f-c7f7-4200-89b3-50b7f57f3d79" (UID: "b8eee21f-c7f7-4200-89b3-50b7f57f3d79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.709263 4955 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.709475 4955 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.709569 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.712375 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8eee21f-c7f7-4200-89b3-50b7f57f3d79" (UID: "b8eee21f-c7f7-4200-89b3-50b7f57f3d79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.811774 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8eee21f-c7f7-4200-89b3-50b7f57f3d79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.816903 4955 generic.go:334] "Generic (PLEG): container finished" podID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerID="ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01" exitCode=0 Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.816957 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.816984 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerDied","Data":"ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01"} Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.817215 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b8eee21f-c7f7-4200-89b3-50b7f57f3d79","Type":"ContainerDied","Data":"9e92ac87aba04506021358146699afbae0993187194902bd98190b1da1d87adb"} Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.817237 4955 scope.go:117] "RemoveContainer" containerID="40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.843750 4955 scope.go:117] "RemoveContainer" containerID="2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.861577 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.877871 4955 scope.go:117] "RemoveContainer" 
containerID="ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.888315 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.899771 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:41:50 crc kubenswrapper[4955]: E1128 06:41:50.900131 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="ceilometer-central-agent" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.900148 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="ceilometer-central-agent" Nov 28 06:41:50 crc kubenswrapper[4955]: E1128 06:41:50.900165 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="ceilometer-notification-agent" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.900171 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="ceilometer-notification-agent" Nov 28 06:41:50 crc kubenswrapper[4955]: E1128 06:41:50.900192 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="sg-core" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.900198 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="sg-core" Nov 28 06:41:50 crc kubenswrapper[4955]: E1128 06:41:50.900208 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="proxy-httpd" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.900213 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="proxy-httpd" Nov 28 06:41:50 
crc kubenswrapper[4955]: I1128 06:41:50.900367 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="sg-core" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.900381 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="ceilometer-central-agent" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.900390 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="proxy-httpd" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.900399 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" containerName="ceilometer-notification-agent" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.902083 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.916388 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.916594 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.916716 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.920058 4955 scope.go:117] "RemoveContainer" containerID="109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.928998 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.943603 4955 scope.go:117] "RemoveContainer" containerID="40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3" Nov 28 06:41:50 
crc kubenswrapper[4955]: E1128 06:41:50.944050 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3\": container with ID starting with 40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3 not found: ID does not exist" containerID="40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.944105 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3"} err="failed to get container status \"40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3\": rpc error: code = NotFound desc = could not find container \"40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3\": container with ID starting with 40db5db9a8f41225064491847c87dc457c8f3f5da04815c8f664fdc91f4831f3 not found: ID does not exist" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.944137 4955 scope.go:117] "RemoveContainer" containerID="2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e" Nov 28 06:41:50 crc kubenswrapper[4955]: E1128 06:41:50.944439 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e\": container with ID starting with 2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e not found: ID does not exist" containerID="2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.944480 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e"} err="failed to get container status 
\"2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e\": rpc error: code = NotFound desc = could not find container \"2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e\": container with ID starting with 2a40a56a277717b790988c9660303ffdddb5f86186f9c9cf507d4ee2b26f205e not found: ID does not exist" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.944523 4955 scope.go:117] "RemoveContainer" containerID="ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01" Nov 28 06:41:50 crc kubenswrapper[4955]: E1128 06:41:50.944828 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01\": container with ID starting with ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01 not found: ID does not exist" containerID="ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.944858 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01"} err="failed to get container status \"ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01\": rpc error: code = NotFound desc = could not find container \"ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01\": container with ID starting with ad5c159ab9b4eb22ca46d6c55fab67aed6b6a52427a8d12b7925f7324b132c01 not found: ID does not exist" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.944880 4955 scope.go:117] "RemoveContainer" containerID="109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b" Nov 28 06:41:50 crc kubenswrapper[4955]: E1128 06:41:50.945201 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b\": container with ID starting with 109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b not found: ID does not exist" containerID="109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b" Nov 28 06:41:50 crc kubenswrapper[4955]: I1128 06:41:50.945236 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b"} err="failed to get container status \"109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b\": rpc error: code = NotFound desc = could not find container \"109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b\": container with ID starting with 109cbcebebff6934fb812dbbc1b11311a4d48003aaf09ff367f331afd6d3538b not found: ID does not exist" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.018350 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7fkc\" (UniqueName: \"kubernetes.io/projected/cee6f72e-fd30-4482-881a-4afb4c003099-kube-api-access-b7fkc\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.018457 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cee6f72e-fd30-4482-881a-4afb4c003099-log-httpd\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.018568 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-scripts\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 
crc kubenswrapper[4955]: I1128 06:41:51.018596 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.018648 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.018670 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-config-data\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.018705 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cee6f72e-fd30-4482-881a-4afb4c003099-run-httpd\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.018748 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.120648 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-scripts\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.120697 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.120737 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.120755 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-config-data\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.120782 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cee6f72e-fd30-4482-881a-4afb4c003099-run-httpd\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.120810 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 
06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.120826 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7fkc\" (UniqueName: \"kubernetes.io/projected/cee6f72e-fd30-4482-881a-4afb4c003099-kube-api-access-b7fkc\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.120873 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cee6f72e-fd30-4482-881a-4afb4c003099-log-httpd\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.121302 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cee6f72e-fd30-4482-881a-4afb4c003099-log-httpd\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.122798 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cee6f72e-fd30-4482-881a-4afb4c003099-run-httpd\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.125379 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.125702 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-config-data\") pod 
\"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.126426 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-scripts\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.128360 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.131268 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cee6f72e-fd30-4482-881a-4afb4c003099-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.139553 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7fkc\" (UniqueName: \"kubernetes.io/projected/cee6f72e-fd30-4482-881a-4afb4c003099-kube-api-access-b7fkc\") pod \"ceilometer-0\" (UID: \"cee6f72e-fd30-4482-881a-4afb4c003099\") " pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.233354 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.715948 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8eee21f-c7f7-4200-89b3-50b7f57f3d79" path="/var/lib/kubelet/pods/b8eee21f-c7f7-4200-89b3-50b7f57f3d79/volumes" Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.732946 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 06:41:51 crc kubenswrapper[4955]: W1128 06:41:51.734484 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcee6f72e_fd30_4482_881a_4afb4c003099.slice/crio-fb5f032a7762a90d78efe8a9608189181d79b235a7f0696085bd9b78cb37c7a8 WatchSource:0}: Error finding container fb5f032a7762a90d78efe8a9608189181d79b235a7f0696085bd9b78cb37c7a8: Status 404 returned error can't find the container with id fb5f032a7762a90d78efe8a9608189181d79b235a7f0696085bd9b78cb37c7a8 Nov 28 06:41:51 crc kubenswrapper[4955]: I1128 06:41:51.836757 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cee6f72e-fd30-4482-881a-4afb4c003099","Type":"ContainerStarted","Data":"fb5f032a7762a90d78efe8a9608189181d79b235a7f0696085bd9b78cb37c7a8"} Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.370540 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.550276 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c36dde0-1b33-4d9d-b043-59c33196b7be-logs\") pod \"2c36dde0-1b33-4d9d-b043-59c33196b7be\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.550542 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-config-data\") pod \"2c36dde0-1b33-4d9d-b043-59c33196b7be\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.550575 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mzt7\" (UniqueName: \"kubernetes.io/projected/2c36dde0-1b33-4d9d-b043-59c33196b7be-kube-api-access-9mzt7\") pod \"2c36dde0-1b33-4d9d-b043-59c33196b7be\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.550723 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-combined-ca-bundle\") pod \"2c36dde0-1b33-4d9d-b043-59c33196b7be\" (UID: \"2c36dde0-1b33-4d9d-b043-59c33196b7be\") " Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.550969 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c36dde0-1b33-4d9d-b043-59c33196b7be-logs" (OuterVolumeSpecName: "logs") pod "2c36dde0-1b33-4d9d-b043-59c33196b7be" (UID: "2c36dde0-1b33-4d9d-b043-59c33196b7be"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.551272 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c36dde0-1b33-4d9d-b043-59c33196b7be-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.556999 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c36dde0-1b33-4d9d-b043-59c33196b7be-kube-api-access-9mzt7" (OuterVolumeSpecName: "kube-api-access-9mzt7") pod "2c36dde0-1b33-4d9d-b043-59c33196b7be" (UID: "2c36dde0-1b33-4d9d-b043-59c33196b7be"). InnerVolumeSpecName "kube-api-access-9mzt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.593257 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c36dde0-1b33-4d9d-b043-59c33196b7be" (UID: "2c36dde0-1b33-4d9d-b043-59c33196b7be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.610869 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-config-data" (OuterVolumeSpecName: "config-data") pod "2c36dde0-1b33-4d9d-b043-59c33196b7be" (UID: "2c36dde0-1b33-4d9d-b043-59c33196b7be"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.655087 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.655129 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c36dde0-1b33-4d9d-b043-59c33196b7be-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.655142 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mzt7\" (UniqueName: \"kubernetes.io/projected/2c36dde0-1b33-4d9d-b043-59c33196b7be-kube-api-access-9mzt7\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.849880 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cee6f72e-fd30-4482-881a-4afb4c003099","Type":"ContainerStarted","Data":"8b677f098bf343a31ee7b5408c8a24e0b34414aadee220482f6999fabe3cfdd4"} Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.852526 4955 generic.go:334] "Generic (PLEG): container finished" podID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerID="6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec" exitCode=0 Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.852571 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2c36dde0-1b33-4d9d-b043-59c33196b7be","Type":"ContainerDied","Data":"6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec"} Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.852599 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2c36dde0-1b33-4d9d-b043-59c33196b7be","Type":"ContainerDied","Data":"0b178504039f8819a7858a781b725583a77cd17e56d2a5df35e1410f320e86cb"} Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.852618 4955 scope.go:117] "RemoveContainer" containerID="6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.852707 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.883914 4955 scope.go:117] "RemoveContainer" containerID="be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.900186 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.911030 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.915633 4955 scope.go:117] "RemoveContainer" containerID="6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec" Nov 28 06:41:52 crc kubenswrapper[4955]: E1128 06:41:52.916100 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec\": container with ID starting with 6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec not found: ID does not exist" containerID="6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.916162 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec"} err="failed to get container status \"6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec\": rpc error: code = NotFound desc = could not 
find container \"6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec\": container with ID starting with 6e54c30e21d54fea2b8ac2277e6a9318c6f8e1085a6d0de5492cd5721f3c48ec not found: ID does not exist" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.916190 4955 scope.go:117] "RemoveContainer" containerID="be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3" Nov 28 06:41:52 crc kubenswrapper[4955]: E1128 06:41:52.916435 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3\": container with ID starting with be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3 not found: ID does not exist" containerID="be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.916460 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3"} err="failed to get container status \"be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3\": rpc error: code = NotFound desc = could not find container \"be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3\": container with ID starting with be42da56c9ed87ffd5a34e3eb253a7db7f3cc9ca603b2c4ffe6459f789408ba3 not found: ID does not exist" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.920799 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:52 crc kubenswrapper[4955]: E1128 06:41:52.921246 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-api" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.921267 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-api" Nov 28 06:41:52 
crc kubenswrapper[4955]: E1128 06:41:52.921305 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-log" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.921313 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-log" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.921617 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-log" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.921641 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" containerName="nova-api-api" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.923154 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.925225 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.925804 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.926213 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 06:41:52 crc kubenswrapper[4955]: I1128 06:41:52.944338 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.063227 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc 
kubenswrapper[4955]: I1128 06:41:53.063687 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187545b-7dae-4b82-8ba0-06efecbef970-logs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.063732 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-public-tls-certs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.063759 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-config-data\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.063838 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.063914 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnng\" (UniqueName: \"kubernetes.io/projected/8187545b-7dae-4b82-8ba0-06efecbef970-kube-api-access-rrnng\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.142263 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.165947 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187545b-7dae-4b82-8ba0-06efecbef970-logs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.165996 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-public-tls-certs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.166019 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-config-data\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.166073 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.166118 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrnng\" (UniqueName: \"kubernetes.io/projected/8187545b-7dae-4b82-8ba0-06efecbef970-kube-api-access-rrnng\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.166168 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.167045 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187545b-7dae-4b82-8ba0-06efecbef970-logs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.170065 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-config-data\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.170590 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.171257 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-public-tls-certs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.173388 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.173555 4955 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.188119 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrnng\" (UniqueName: \"kubernetes.io/projected/8187545b-7dae-4b82-8ba0-06efecbef970-kube-api-access-rrnng\") pod \"nova-api-0\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.265985 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.716146 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c36dde0-1b33-4d9d-b043-59c33196b7be" path="/var/lib/kubelet/pods/2c36dde0-1b33-4d9d-b043-59c33196b7be/volumes" Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.753283 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:41:53 crc kubenswrapper[4955]: W1128 06:41:53.756035 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8187545b_7dae_4b82_8ba0_06efecbef970.slice/crio-a83b3f3c9a1c67a75b6c0e9b18c533b480247ac9e0be19944f85390298098ed3 WatchSource:0}: Error finding container a83b3f3c9a1c67a75b6c0e9b18c533b480247ac9e0be19944f85390298098ed3: Status 404 returned error can't find the container with id a83b3f3c9a1c67a75b6c0e9b18c533b480247ac9e0be19944f85390298098ed3 Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.865540 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8187545b-7dae-4b82-8ba0-06efecbef970","Type":"ContainerStarted","Data":"a83b3f3c9a1c67a75b6c0e9b18c533b480247ac9e0be19944f85390298098ed3"} Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.870910 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"cee6f72e-fd30-4482-881a-4afb4c003099","Type":"ContainerStarted","Data":"a940cb1bd567279db553d565ebf903529b5d8ee5ce4f53dfd2abe23ccf2c19e6"} Nov 28 06:41:53 crc kubenswrapper[4955]: I1128 06:41:53.889489 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.093080 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sstdc"] Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.094426 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.097809 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.098043 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.104710 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sstdc"] Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.188598 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lmtf\" (UniqueName: \"kubernetes.io/projected/dc75b27e-406c-4d68-868c-75c33da792ab-kube-api-access-8lmtf\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.188662 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-config-data\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " 
pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.188690 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-scripts\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.188715 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.290051 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lmtf\" (UniqueName: \"kubernetes.io/projected/dc75b27e-406c-4d68-868c-75c33da792ab-kube-api-access-8lmtf\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.290342 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-config-data\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.290377 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-scripts\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " 
pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.290409 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.295107 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-config-data\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.295345 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.295735 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-scripts\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.307057 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lmtf\" (UniqueName: \"kubernetes.io/projected/dc75b27e-406c-4d68-868c-75c33da792ab-kube-api-access-8lmtf\") pod \"nova-cell1-cell-mapping-sstdc\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc 
kubenswrapper[4955]: I1128 06:41:54.481838 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.885633 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cee6f72e-fd30-4482-881a-4afb4c003099","Type":"ContainerStarted","Data":"e087bc10da99775174fc4ac1d550075f6326a50be69d9cdc296d184349d16588"} Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.889386 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8187545b-7dae-4b82-8ba0-06efecbef970","Type":"ContainerStarted","Data":"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855"} Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.889440 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8187545b-7dae-4b82-8ba0-06efecbef970","Type":"ContainerStarted","Data":"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7"} Nov 28 06:41:54 crc kubenswrapper[4955]: I1128 06:41:54.908739 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9087217020000002 podStartE2EDuration="2.908721702s" podCreationTimestamp="2025-11-28 06:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:54.906898191 +0000 UTC m=+1237.496153771" watchObservedRunningTime="2025-11-28 06:41:54.908721702 +0000 UTC m=+1237.497977272" Nov 28 06:41:55 crc kubenswrapper[4955]: W1128 06:41:55.074033 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc75b27e_406c_4d68_868c_75c33da792ab.slice/crio-60bcd788ddb4b157ac667c581fb259698439187bc70d2dbe297bf7391503df69 WatchSource:0}: Error finding container 
60bcd788ddb4b157ac667c581fb259698439187bc70d2dbe297bf7391503df69: Status 404 returned error can't find the container with id 60bcd788ddb4b157ac667c581fb259698439187bc70d2dbe297bf7391503df69 Nov 28 06:41:55 crc kubenswrapper[4955]: I1128 06:41:55.074296 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sstdc"] Nov 28 06:41:55 crc kubenswrapper[4955]: I1128 06:41:55.899978 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cee6f72e-fd30-4482-881a-4afb4c003099","Type":"ContainerStarted","Data":"6576640820e16e020e29b893e36b024adc1f9a200cd09a2a4768f4bb25a2a25f"} Nov 28 06:41:55 crc kubenswrapper[4955]: I1128 06:41:55.901619 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 06:41:55 crc kubenswrapper[4955]: I1128 06:41:55.904273 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sstdc" event={"ID":"dc75b27e-406c-4d68-868c-75c33da792ab","Type":"ContainerStarted","Data":"c26f3901b7a027d70391681e78315809639478f84c71b57747b4de3caa2be945"} Nov 28 06:41:55 crc kubenswrapper[4955]: I1128 06:41:55.904309 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sstdc" event={"ID":"dc75b27e-406c-4d68-868c-75c33da792ab","Type":"ContainerStarted","Data":"60bcd788ddb4b157ac667c581fb259698439187bc70d2dbe297bf7391503df69"} Nov 28 06:41:55 crc kubenswrapper[4955]: I1128 06:41:55.929493 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.196416979 podStartE2EDuration="5.929470181s" podCreationTimestamp="2025-11-28 06:41:50 +0000 UTC" firstStartedPulling="2025-11-28 06:41:51.736871018 +0000 UTC m=+1234.326126588" lastFinishedPulling="2025-11-28 06:41:55.46992422 +0000 UTC m=+1238.059179790" observedRunningTime="2025-11-28 06:41:55.920260551 +0000 UTC m=+1238.509516141" 
watchObservedRunningTime="2025-11-28 06:41:55.929470181 +0000 UTC m=+1238.518725761" Nov 28 06:41:55 crc kubenswrapper[4955]: I1128 06:41:55.948408 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sstdc" podStartSLOduration=1.9483845039999999 podStartE2EDuration="1.948384504s" podCreationTimestamp="2025-11-28 06:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:41:55.94186626 +0000 UTC m=+1238.531121830" watchObservedRunningTime="2025-11-28 06:41:55.948384504 +0000 UTC m=+1238.537640064" Nov 28 06:41:56 crc kubenswrapper[4955]: I1128 06:41:56.298636 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:41:56 crc kubenswrapper[4955]: I1128 06:41:56.394243 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4lg9n"] Nov 28 06:41:56 crc kubenswrapper[4955]: I1128 06:41:56.394464 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" podUID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" containerName="dnsmasq-dns" containerID="cri-o://01b04c868eacb805b05d8fd55275cd2af4e8e5e60540db128aff93bac204f619" gracePeriod=10 Nov 28 06:41:56 crc kubenswrapper[4955]: I1128 06:41:56.913275 4955 generic.go:334] "Generic (PLEG): container finished" podID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" containerID="01b04c868eacb805b05d8fd55275cd2af4e8e5e60540db128aff93bac204f619" exitCode=0 Nov 28 06:41:56 crc kubenswrapper[4955]: I1128 06:41:56.913332 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" event={"ID":"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea","Type":"ContainerDied","Data":"01b04c868eacb805b05d8fd55275cd2af4e8e5e60540db128aff93bac204f619"} Nov 28 06:41:56 crc kubenswrapper[4955]: I1128 06:41:56.914383 
4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" event={"ID":"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea","Type":"ContainerDied","Data":"f9f0bb472bdcef0f21042db89bf8b60894eb625fb6ca3bc3ffe21cdff89c6e1f"} Nov 28 06:41:56 crc kubenswrapper[4955]: I1128 06:41:56.914409 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9f0bb472bdcef0f21042db89bf8b60894eb625fb6ca3bc3ffe21cdff89c6e1f" Nov 28 06:41:56 crc kubenswrapper[4955]: I1128 06:41:56.924583 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.053356 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-swift-storage-0\") pod \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.053417 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-sb\") pod \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.053462 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-nb\") pod \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.053564 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlddp\" (UniqueName: 
\"kubernetes.io/projected/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-kube-api-access-tlddp\") pod \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.053604 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-svc\") pod \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.053720 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-config\") pod \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\" (UID: \"6c0fc60a-bc19-418c-8a9c-8cf6aa10afea\") " Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.062716 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-kube-api-access-tlddp" (OuterVolumeSpecName: "kube-api-access-tlddp") pod "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" (UID: "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea"). InnerVolumeSpecName "kube-api-access-tlddp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.110109 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" (UID: "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.114319 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-config" (OuterVolumeSpecName: "config") pod "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" (UID: "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.123451 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" (UID: "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.133813 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" (UID: "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.145254 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" (UID: "6c0fc60a-bc19-418c-8a9c-8cf6aa10afea"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.156239 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.156284 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.156293 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.156302 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlddp\" (UniqueName: \"kubernetes.io/projected/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-kube-api-access-tlddp\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.156312 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.156322 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.921184 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-4lg9n" Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.946256 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4lg9n"] Nov 28 06:41:57 crc kubenswrapper[4955]: I1128 06:41:57.956038 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-4lg9n"] Nov 28 06:41:59 crc kubenswrapper[4955]: I1128 06:41:59.728984 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" path="/var/lib/kubelet/pods/6c0fc60a-bc19-418c-8a9c-8cf6aa10afea/volumes" Nov 28 06:42:00 crc kubenswrapper[4955]: I1128 06:42:00.954967 4955 generic.go:334] "Generic (PLEG): container finished" podID="dc75b27e-406c-4d68-868c-75c33da792ab" containerID="c26f3901b7a027d70391681e78315809639478f84c71b57747b4de3caa2be945" exitCode=0 Nov 28 06:42:00 crc kubenswrapper[4955]: I1128 06:42:00.955038 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sstdc" event={"ID":"dc75b27e-406c-4d68-868c-75c33da792ab","Type":"ContainerDied","Data":"c26f3901b7a027d70391681e78315809639478f84c71b57747b4de3caa2be945"} Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.397661 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.570629 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-combined-ca-bundle\") pod \"dc75b27e-406c-4d68-868c-75c33da792ab\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.570928 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lmtf\" (UniqueName: \"kubernetes.io/projected/dc75b27e-406c-4d68-868c-75c33da792ab-kube-api-access-8lmtf\") pod \"dc75b27e-406c-4d68-868c-75c33da792ab\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.571069 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-config-data\") pod \"dc75b27e-406c-4d68-868c-75c33da792ab\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.571110 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-scripts\") pod \"dc75b27e-406c-4d68-868c-75c33da792ab\" (UID: \"dc75b27e-406c-4d68-868c-75c33da792ab\") " Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.584144 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc75b27e-406c-4d68-868c-75c33da792ab-kube-api-access-8lmtf" (OuterVolumeSpecName: "kube-api-access-8lmtf") pod "dc75b27e-406c-4d68-868c-75c33da792ab" (UID: "dc75b27e-406c-4d68-868c-75c33da792ab"). InnerVolumeSpecName "kube-api-access-8lmtf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.584176 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-scripts" (OuterVolumeSpecName: "scripts") pod "dc75b27e-406c-4d68-868c-75c33da792ab" (UID: "dc75b27e-406c-4d68-868c-75c33da792ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.602817 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc75b27e-406c-4d68-868c-75c33da792ab" (UID: "dc75b27e-406c-4d68-868c-75c33da792ab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.606157 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-config-data" (OuterVolumeSpecName: "config-data") pod "dc75b27e-406c-4d68-868c-75c33da792ab" (UID: "dc75b27e-406c-4d68-868c-75c33da792ab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.672856 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.672883 4955 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.672891 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc75b27e-406c-4d68-868c-75c33da792ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.672902 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lmtf\" (UniqueName: \"kubernetes.io/projected/dc75b27e-406c-4d68-868c-75c33da792ab-kube-api-access-8lmtf\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.984166 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sstdc" event={"ID":"dc75b27e-406c-4d68-868c-75c33da792ab","Type":"ContainerDied","Data":"60bcd788ddb4b157ac667c581fb259698439187bc70d2dbe297bf7391503df69"} Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.984223 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60bcd788ddb4b157ac667c581fb259698439187bc70d2dbe297bf7391503df69" Nov 28 06:42:02 crc kubenswrapper[4955]: I1128 06:42:02.984787 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sstdc" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.176872 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.177180 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" containerName="nova-api-log" containerID="cri-o://7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7" gracePeriod=30 Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.177310 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" containerName="nova-api-api" containerID="cri-o://39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855" gracePeriod=30 Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.192401 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.192695 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dc4d1535-01f8-4a19-8381-fe1265f92331" containerName="nova-scheduler-scheduler" containerID="cri-o://32bd8b326acdf3708c77257a2097a6306a8d7dcbc5567190b421e1364bfcf47a" gracePeriod=30 Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.240017 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.240523 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-log" containerID="cri-o://07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041" gracePeriod=30 Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.240663 4955 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-metadata" containerID="cri-o://9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d" gracePeriod=30 Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.775989 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.896359 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrnng\" (UniqueName: \"kubernetes.io/projected/8187545b-7dae-4b82-8ba0-06efecbef970-kube-api-access-rrnng\") pod \"8187545b-7dae-4b82-8ba0-06efecbef970\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.896480 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187545b-7dae-4b82-8ba0-06efecbef970-logs\") pod \"8187545b-7dae-4b82-8ba0-06efecbef970\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.896560 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-internal-tls-certs\") pod \"8187545b-7dae-4b82-8ba0-06efecbef970\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.896622 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-config-data\") pod \"8187545b-7dae-4b82-8ba0-06efecbef970\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.896654 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-combined-ca-bundle\") pod \"8187545b-7dae-4b82-8ba0-06efecbef970\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.896717 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-public-tls-certs\") pod \"8187545b-7dae-4b82-8ba0-06efecbef970\" (UID: \"8187545b-7dae-4b82-8ba0-06efecbef970\") " Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.899373 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8187545b-7dae-4b82-8ba0-06efecbef970-logs" (OuterVolumeSpecName: "logs") pod "8187545b-7dae-4b82-8ba0-06efecbef970" (UID: "8187545b-7dae-4b82-8ba0-06efecbef970"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.911841 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8187545b-7dae-4b82-8ba0-06efecbef970-kube-api-access-rrnng" (OuterVolumeSpecName: "kube-api-access-rrnng") pod "8187545b-7dae-4b82-8ba0-06efecbef970" (UID: "8187545b-7dae-4b82-8ba0-06efecbef970"). InnerVolumeSpecName "kube-api-access-rrnng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.928675 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8187545b-7dae-4b82-8ba0-06efecbef970" (UID: "8187545b-7dae-4b82-8ba0-06efecbef970"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.928782 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-config-data" (OuterVolumeSpecName: "config-data") pod "8187545b-7dae-4b82-8ba0-06efecbef970" (UID: "8187545b-7dae-4b82-8ba0-06efecbef970"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.960979 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8187545b-7dae-4b82-8ba0-06efecbef970" (UID: "8187545b-7dae-4b82-8ba0-06efecbef970"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.961007 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8187545b-7dae-4b82-8ba0-06efecbef970" (UID: "8187545b-7dae-4b82-8ba0-06efecbef970"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.993554 4955 generic.go:334] "Generic (PLEG): container finished" podID="8187545b-7dae-4b82-8ba0-06efecbef970" containerID="39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855" exitCode=0 Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.993584 4955 generic.go:334] "Generic (PLEG): container finished" podID="8187545b-7dae-4b82-8ba0-06efecbef970" containerID="7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7" exitCode=143 Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.993636 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8187545b-7dae-4b82-8ba0-06efecbef970","Type":"ContainerDied","Data":"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855"} Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.993665 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8187545b-7dae-4b82-8ba0-06efecbef970","Type":"ContainerDied","Data":"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7"} Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.993677 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8187545b-7dae-4b82-8ba0-06efecbef970","Type":"ContainerDied","Data":"a83b3f3c9a1c67a75b6c0e9b18c533b480247ac9e0be19944f85390298098ed3"} Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.993693 4955 scope.go:117] "RemoveContainer" containerID="39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.994380 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.995580 4955 generic.go:334] "Generic (PLEG): container finished" podID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerID="07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041" exitCode=143 Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.995627 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0","Type":"ContainerDied","Data":"07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041"} Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.998760 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8187545b-7dae-4b82-8ba0-06efecbef970-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.998781 4955 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.998794 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.998803 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.998821 4955 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8187545b-7dae-4b82-8ba0-06efecbef970-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:03 crc kubenswrapper[4955]: I1128 06:42:03.998830 4955 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrnng\" (UniqueName: \"kubernetes.io/projected/8187545b-7dae-4b82-8ba0-06efecbef970-kube-api-access-rrnng\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.025435 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.030869 4955 scope.go:117] "RemoveContainer" containerID="7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.034931 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.052935 4955 scope.go:117] "RemoveContainer" containerID="39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855" Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.053469 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855\": container with ID starting with 39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855 not found: ID does not exist" containerID="39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.053540 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855"} err="failed to get container status \"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855\": rpc error: code = NotFound desc = could not find container \"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855\": container with ID starting with 39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855 not found: ID does not exist" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.053573 4955 
scope.go:117] "RemoveContainer" containerID="7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7" Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.054025 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7\": container with ID starting with 7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7 not found: ID does not exist" containerID="7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.054080 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7"} err="failed to get container status \"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7\": rpc error: code = NotFound desc = could not find container \"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7\": container with ID starting with 7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7 not found: ID does not exist" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.054116 4955 scope.go:117] "RemoveContainer" containerID="39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.054786 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855"} err="failed to get container status \"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855\": rpc error: code = NotFound desc = could not find container \"39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855\": container with ID starting with 39bf0ef628c2ec24ff7c5843c5b279d66f67435c4d60170d229fa4e81c3b4855 not found: ID does not exist" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 
06:42:04.054823 4955 scope.go:117] "RemoveContainer" containerID="7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.055805 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7"} err="failed to get container status \"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7\": rpc error: code = NotFound desc = could not find container \"7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7\": container with ID starting with 7de721567f9b64659130763ca903e7e19637ebf2cbbeba32909be4866d3b5bd7 not found: ID does not exist" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.058795 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.059245 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" containerName="init" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059269 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" containerName="init" Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.059288 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc75b27e-406c-4d68-868c-75c33da792ab" containerName="nova-manage" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059296 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc75b27e-406c-4d68-868c-75c33da792ab" containerName="nova-manage" Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.059312 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" containerName="dnsmasq-dns" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059321 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" 
containerName="dnsmasq-dns" Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.059331 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" containerName="nova-api-log" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059339 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" containerName="nova-api-log" Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.059385 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" containerName="nova-api-api" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059395 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" containerName="nova-api-api" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059643 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c0fc60a-bc19-418c-8a9c-8cf6aa10afea" containerName="dnsmasq-dns" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059681 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc75b27e-406c-4d68-868c-75c33da792ab" containerName="nova-manage" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059701 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" containerName="nova-api-log" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.059718 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" containerName="nova-api-api" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.060869 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.062731 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.062909 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.063594 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.081455 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.111788 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-config-data\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.111953 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.112003 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03bbb794-571b-4980-8445-7766a14bb5c9-logs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.112036 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.112067 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxjn9\" (UniqueName: \"kubernetes.io/projected/03bbb794-571b-4980-8445-7766a14bb5c9-kube-api-access-nxjn9\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.112099 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.151545 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="32bd8b326acdf3708c77257a2097a6306a8d7dcbc5567190b421e1364bfcf47a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.152848 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="32bd8b326acdf3708c77257a2097a6306a8d7dcbc5567190b421e1364bfcf47a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.154092 4955 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="32bd8b326acdf3708c77257a2097a6306a8d7dcbc5567190b421e1364bfcf47a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 06:42:04 crc kubenswrapper[4955]: E1128 06:42:04.154134 4955 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc4d1535-01f8-4a19-8381-fe1265f92331" containerName="nova-scheduler-scheduler" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.216103 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-config-data\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.216220 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.216257 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03bbb794-571b-4980-8445-7766a14bb5c9-logs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.216277 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.216303 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nxjn9\" (UniqueName: \"kubernetes.io/projected/03bbb794-571b-4980-8445-7766a14bb5c9-kube-api-access-nxjn9\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.216331 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.216919 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03bbb794-571b-4980-8445-7766a14bb5c9-logs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.223438 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-public-tls-certs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.223534 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.223455 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " 
pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.225179 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03bbb794-571b-4980-8445-7766a14bb5c9-config-data\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.233005 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxjn9\" (UniqueName: \"kubernetes.io/projected/03bbb794-571b-4980-8445-7766a14bb5c9-kube-api-access-nxjn9\") pod \"nova-api-0\" (UID: \"03bbb794-571b-4980-8445-7766a14bb5c9\") " pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.389011 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 06:42:04 crc kubenswrapper[4955]: I1128 06:42:04.887807 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 06:42:04 crc kubenswrapper[4955]: W1128 06:42:04.901545 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03bbb794_571b_4980_8445_7766a14bb5c9.slice/crio-16f7a674bdf9d0016e5d607b5a2f7c005cb658b3f3b81c896cda8c9c4fc4b409 WatchSource:0}: Error finding container 16f7a674bdf9d0016e5d607b5a2f7c005cb658b3f3b81c896cda8c9c4fc4b409: Status 404 returned error can't find the container with id 16f7a674bdf9d0016e5d607b5a2f7c005cb658b3f3b81c896cda8c9c4fc4b409 Nov 28 06:42:05 crc kubenswrapper[4955]: I1128 06:42:05.011699 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03bbb794-571b-4980-8445-7766a14bb5c9","Type":"ContainerStarted","Data":"16f7a674bdf9d0016e5d607b5a2f7c005cb658b3f3b81c896cda8c9c4fc4b409"} Nov 28 06:42:05 crc kubenswrapper[4955]: I1128 06:42:05.719755 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8187545b-7dae-4b82-8ba0-06efecbef970" path="/var/lib/kubelet/pods/8187545b-7dae-4b82-8ba0-06efecbef970/volumes" Nov 28 06:42:06 crc kubenswrapper[4955]: I1128 06:42:06.026155 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03bbb794-571b-4980-8445-7766a14bb5c9","Type":"ContainerStarted","Data":"7ffaaed44e781e8c82a737b9cb2b72c5b94a30b4c5b240a31237719677520e42"} Nov 28 06:42:06 crc kubenswrapper[4955]: I1128 06:42:06.026216 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"03bbb794-571b-4980-8445-7766a14bb5c9","Type":"ContainerStarted","Data":"5ec8fb3850f319135b076a68203702b4e30238bf9dd3c65e7d3b2b023640899e"} Nov 28 06:42:06 crc kubenswrapper[4955]: I1128 06:42:06.383137 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:59028->10.217.0.193:8775: read: connection reset by peer" Nov 28 06:42:06 crc kubenswrapper[4955]: I1128 06:42:06.383150 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": read tcp 10.217.0.2:59016->10.217.0.193:8775: read: connection reset by peer" Nov 28 06:42:06 crc kubenswrapper[4955]: I1128 06:42:06.928915 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:42:06 crc kubenswrapper[4955]: I1128 06:42:06.947292 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.947274133 podStartE2EDuration="2.947274133s" podCreationTimestamp="2025-11-28 06:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:42:06.054598887 +0000 UTC m=+1248.643854547" watchObservedRunningTime="2025-11-28 06:42:06.947274133 +0000 UTC m=+1249.536529703" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.036785 4955 generic.go:334] "Generic (PLEG): container finished" podID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerID="9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d" exitCode=0 Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.036845 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0","Type":"ContainerDied","Data":"9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d"} Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.036867 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.037134 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0","Type":"ContainerDied","Data":"c7f2a94984d753017e817ec136be218728bbd1baac22eea7eb3e0c500aee84c1"} Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.037164 4955 scope.go:117] "RemoveContainer" containerID="9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.054167 4955 scope.go:117] "RemoveContainer" containerID="07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.073875 4955 scope.go:117] "RemoveContainer" containerID="9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d" Nov 28 06:42:07 crc kubenswrapper[4955]: E1128 06:42:07.074393 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d\": container with ID starting with 9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d not found: ID does not exist" containerID="9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.074444 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d"} err="failed to get container status \"9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d\": rpc error: code = NotFound desc = could not find container \"9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d\": container with ID starting with 9c05f0d825593018ee7e89ff2f597e580bc95dc9922b342f09c2cfe4053ddf0d not found: ID does not exist" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 
06:42:07.074471 4955 scope.go:117] "RemoveContainer" containerID="07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.074600 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-config-data\") pod \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.074706 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpsm5\" (UniqueName: \"kubernetes.io/projected/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-kube-api-access-qpsm5\") pod \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " Nov 28 06:42:07 crc kubenswrapper[4955]: E1128 06:42:07.074802 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041\": container with ID starting with 07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041 not found: ID does not exist" containerID="07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.074832 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041"} err="failed to get container status \"07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041\": rpc error: code = NotFound desc = could not find container \"07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041\": container with ID starting with 07a049092f4c920a57154b9b490964c4475a344a481ee1a8109d2c765abbe041 not found: ID does not exist" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.074981 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-logs\") pod \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.075060 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-nova-metadata-tls-certs\") pod \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.075126 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-combined-ca-bundle\") pod \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\" (UID: \"2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0\") " Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.075440 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-logs" (OuterVolumeSpecName: "logs") pod "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" (UID: "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.075888 4955 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-logs\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.086443 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-kube-api-access-qpsm5" (OuterVolumeSpecName: "kube-api-access-qpsm5") pod "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" (UID: "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0"). 
InnerVolumeSpecName "kube-api-access-qpsm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.101036 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-config-data" (OuterVolumeSpecName: "config-data") pod "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" (UID: "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.102627 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" (UID: "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.128674 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" (UID: "2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.177920 4955 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.177958 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.177972 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.177986 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpsm5\" (UniqueName: \"kubernetes.io/projected/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0-kube-api-access-qpsm5\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.370149 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.381922 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.393820 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:42:07 crc kubenswrapper[4955]: E1128 06:42:07.394725 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-metadata" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.394836 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" 
containerName="nova-metadata-metadata" Nov 28 06:42:07 crc kubenswrapper[4955]: E1128 06:42:07.394927 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-log" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.395013 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-log" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.395315 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-log" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.395407 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" containerName="nova-metadata-metadata" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.396667 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.403692 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.403781 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.405276 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.483619 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-config-data\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.483673 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.483774 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.483805 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-799sm\" (UniqueName: \"kubernetes.io/projected/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-kube-api-access-799sm\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.483901 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-logs\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.585691 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.585749 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-799sm\" (UniqueName: 
\"kubernetes.io/projected/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-kube-api-access-799sm\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.585834 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-logs\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.585866 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-config-data\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.585884 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.586464 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-logs\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.589733 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc 
kubenswrapper[4955]: I1128 06:42:07.601058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-config-data\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.601385 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.603250 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-799sm\" (UniqueName: \"kubernetes.io/projected/932b3fc6-dd61-4bcd-9836-f04de5a42ee7-kube-api-access-799sm\") pod \"nova-metadata-0\" (UID: \"932b3fc6-dd61-4bcd-9836-f04de5a42ee7\") " pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.722170 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 06:42:07 crc kubenswrapper[4955]: I1128 06:42:07.732442 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0" path="/var/lib/kubelet/pods/2a9f8816-6c9f-4c8c-8f34-80fbdaeeb9b0/volumes" Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.052233 4955 generic.go:334] "Generic (PLEG): container finished" podID="dc4d1535-01f8-4a19-8381-fe1265f92331" containerID="32bd8b326acdf3708c77257a2097a6306a8d7dcbc5567190b421e1364bfcf47a" exitCode=0 Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.052344 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc4d1535-01f8-4a19-8381-fe1265f92331","Type":"ContainerDied","Data":"32bd8b326acdf3708c77257a2097a6306a8d7dcbc5567190b421e1364bfcf47a"} Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.163264 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.244288 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.296515 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data\") pod \"dc4d1535-01f8-4a19-8381-fe1265f92331\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.296590 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-combined-ca-bundle\") pod \"dc4d1535-01f8-4a19-8381-fe1265f92331\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.296644 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-hp6zv\" (UniqueName: \"kubernetes.io/projected/dc4d1535-01f8-4a19-8381-fe1265f92331-kube-api-access-hp6zv\") pod \"dc4d1535-01f8-4a19-8381-fe1265f92331\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.307705 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4d1535-01f8-4a19-8381-fe1265f92331-kube-api-access-hp6zv" (OuterVolumeSpecName: "kube-api-access-hp6zv") pod "dc4d1535-01f8-4a19-8381-fe1265f92331" (UID: "dc4d1535-01f8-4a19-8381-fe1265f92331"). InnerVolumeSpecName "kube-api-access-hp6zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:08 crc kubenswrapper[4955]: E1128 06:42:08.338374 4955 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data podName:dc4d1535-01f8-4a19-8381-fe1265f92331 nodeName:}" failed. No retries permitted until 2025-11-28 06:42:08.838332585 +0000 UTC m=+1251.427588155 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data") pod "dc4d1535-01f8-4a19-8381-fe1265f92331" (UID: "dc4d1535-01f8-4a19-8381-fe1265f92331") : error deleting /var/lib/kubelet/pods/dc4d1535-01f8-4a19-8381-fe1265f92331/volume-subpaths: remove /var/lib/kubelet/pods/dc4d1535-01f8-4a19-8381-fe1265f92331/volume-subpaths: no such file or directory Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.341051 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc4d1535-01f8-4a19-8381-fe1265f92331" (UID: "dc4d1535-01f8-4a19-8381-fe1265f92331"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.399856 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp6zv\" (UniqueName: \"kubernetes.io/projected/dc4d1535-01f8-4a19-8381-fe1265f92331-kube-api-access-hp6zv\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.399885 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.907531 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data\") pod \"dc4d1535-01f8-4a19-8381-fe1265f92331\" (UID: \"dc4d1535-01f8-4a19-8381-fe1265f92331\") " Nov 28 06:42:08 crc kubenswrapper[4955]: I1128 06:42:08.912024 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data" (OuterVolumeSpecName: "config-data") pod "dc4d1535-01f8-4a19-8381-fe1265f92331" (UID: "dc4d1535-01f8-4a19-8381-fe1265f92331"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.010054 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc4d1535-01f8-4a19-8381-fe1265f92331-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.068152 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc4d1535-01f8-4a19-8381-fe1265f92331","Type":"ContainerDied","Data":"cf0b53250fa1a369fad9b060b13cb292a7bdeb6b3e6bec4e3cd8da9ba3806bcb"} Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.068251 4955 scope.go:117] "RemoveContainer" containerID="32bd8b326acdf3708c77257a2097a6306a8d7dcbc5567190b421e1364bfcf47a" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.068181 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.072284 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"932b3fc6-dd61-4bcd-9836-f04de5a42ee7","Type":"ContainerStarted","Data":"80beae561833bc9ba3c9b3d4feee98e0b5fb020f1a5eca3715927e3cf6df1c94"} Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.072736 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"932b3fc6-dd61-4bcd-9836-f04de5a42ee7","Type":"ContainerStarted","Data":"21cb54a236e672c2090c84234a38c9e60aaf5e5e1abc711162e0ada1b7ffc0a1"} Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.072756 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"932b3fc6-dd61-4bcd-9836-f04de5a42ee7","Type":"ContainerStarted","Data":"247be2fa111f1e919ca1b4e5fc13cba123bb99aabb89a6ad82ea2be859474cea"} Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.094630 4955 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.094614154 podStartE2EDuration="2.094614154s" podCreationTimestamp="2025-11-28 06:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:42:09.093644876 +0000 UTC m=+1251.682900466" watchObservedRunningTime="2025-11-28 06:42:09.094614154 +0000 UTC m=+1251.683869724" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.114762 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.128641 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.140986 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:42:09 crc kubenswrapper[4955]: E1128 06:42:09.141719 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4d1535-01f8-4a19-8381-fe1265f92331" containerName="nova-scheduler-scheduler" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.141754 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4d1535-01f8-4a19-8381-fe1265f92331" containerName="nova-scheduler-scheduler" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.142106 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4d1535-01f8-4a19-8381-fe1265f92331" containerName="nova-scheduler-scheduler" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.143280 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.152610 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.175231 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.316535 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ac9549-3394-4586-ae7d-afede69f862c-config-data\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.316623 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ac9549-3394-4586-ae7d-afede69f862c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.316683 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s756\" (UniqueName: \"kubernetes.io/projected/61ac9549-3394-4586-ae7d-afede69f862c-kube-api-access-2s756\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.418174 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s756\" (UniqueName: \"kubernetes.io/projected/61ac9549-3394-4586-ae7d-afede69f862c-kube-api-access-2s756\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.418322 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ac9549-3394-4586-ae7d-afede69f862c-config-data\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.418374 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ac9549-3394-4586-ae7d-afede69f862c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.424281 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ac9549-3394-4586-ae7d-afede69f862c-config-data\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.425089 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ac9549-3394-4586-ae7d-afede69f862c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.449835 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s756\" (UniqueName: \"kubernetes.io/projected/61ac9549-3394-4586-ae7d-afede69f862c-kube-api-access-2s756\") pod \"nova-scheduler-0\" (UID: \"61ac9549-3394-4586-ae7d-afede69f862c\") " pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.513552 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.718702 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc4d1535-01f8-4a19-8381-fe1265f92331" path="/var/lib/kubelet/pods/dc4d1535-01f8-4a19-8381-fe1265f92331/volumes" Nov 28 06:42:09 crc kubenswrapper[4955]: I1128 06:42:09.987692 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 06:42:10 crc kubenswrapper[4955]: I1128 06:42:10.089193 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"61ac9549-3394-4586-ae7d-afede69f862c","Type":"ContainerStarted","Data":"ce871c548e1abb7ce4939b9363f8cefab5aafc4e9ba0cc5fcb892310911b235a"} Nov 28 06:42:11 crc kubenswrapper[4955]: I1128 06:42:11.107172 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"61ac9549-3394-4586-ae7d-afede69f862c","Type":"ContainerStarted","Data":"7cf7ada026a980d47aa659b344d3d1abdb743fa70440c415da031f7b1754e7ae"} Nov 28 06:42:11 crc kubenswrapper[4955]: I1128 06:42:11.135877 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.135848192 podStartE2EDuration="2.135848192s" podCreationTimestamp="2025-11-28 06:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:42:11.131425418 +0000 UTC m=+1253.720681078" watchObservedRunningTime="2025-11-28 06:42:11.135848192 +0000 UTC m=+1253.725103802" Nov 28 06:42:12 crc kubenswrapper[4955]: I1128 06:42:12.723481 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 06:42:12 crc kubenswrapper[4955]: I1128 06:42:12.723543 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 06:42:14 crc kubenswrapper[4955]: I1128 
06:42:14.390286 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 06:42:14 crc kubenswrapper[4955]: I1128 06:42:14.390624 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 06:42:14 crc kubenswrapper[4955]: I1128 06:42:14.514008 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 06:42:15 crc kubenswrapper[4955]: I1128 06:42:15.409769 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="03bbb794-571b-4980-8445-7766a14bb5c9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 06:42:15 crc kubenswrapper[4955]: I1128 06:42:15.409813 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="03bbb794-571b-4980-8445-7766a14bb5c9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 06:42:17 crc kubenswrapper[4955]: I1128 06:42:17.728016 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 06:42:17 crc kubenswrapper[4955]: I1128 06:42:17.728091 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 06:42:18 crc kubenswrapper[4955]: I1128 06:42:18.736762 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="932b3fc6-dd61-4bcd-9836-f04de5a42ee7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 06:42:18 crc kubenswrapper[4955]: I1128 06:42:18.737307 4955 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-metadata-0" podUID="932b3fc6-dd61-4bcd-9836-f04de5a42ee7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 06:42:19 crc kubenswrapper[4955]: I1128 06:42:19.513904 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 06:42:19 crc kubenswrapper[4955]: I1128 06:42:19.557617 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 06:42:20 crc kubenswrapper[4955]: I1128 06:42:20.294591 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 06:42:21 crc kubenswrapper[4955]: I1128 06:42:21.256812 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 06:42:23 crc kubenswrapper[4955]: I1128 06:42:23.392484 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:42:23 crc kubenswrapper[4955]: I1128 06:42:23.392876 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:42:24 crc kubenswrapper[4955]: I1128 06:42:24.397192 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 06:42:24 crc kubenswrapper[4955]: I1128 06:42:24.397786 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Nov 28 06:42:24 crc kubenswrapper[4955]: I1128 06:42:24.402098 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 06:42:24 crc kubenswrapper[4955]: I1128 06:42:24.403568 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 06:42:25 crc kubenswrapper[4955]: I1128 06:42:25.304705 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 06:42:25 crc kubenswrapper[4955]: I1128 06:42:25.313396 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 06:42:27 crc kubenswrapper[4955]: I1128 06:42:27.732139 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 06:42:27 crc kubenswrapper[4955]: I1128 06:42:27.733651 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 06:42:27 crc kubenswrapper[4955]: I1128 06:42:27.745967 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 06:42:28 crc kubenswrapper[4955]: I1128 06:42:28.340731 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 06:42:36 crc kubenswrapper[4955]: I1128 06:42:36.208806 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 06:42:37 crc kubenswrapper[4955]: I1128 06:42:37.135708 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:42:40 crc kubenswrapper[4955]: I1128 06:42:40.435084 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" containerName="rabbitmq" containerID="cri-o://56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a" 
gracePeriod=604796 Nov 28 06:42:41 crc kubenswrapper[4955]: I1128 06:42:41.256273 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" containerName="rabbitmq" containerID="cri-o://ea41fb5c23be427cafc3835ec29e1540540c8dc4595aea2006d9f2a5e0cea344" gracePeriod=604796 Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.081392 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.223427 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-tls\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.223926 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-plugins-conf\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.224056 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-erlang-cookie\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.224674 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.224733 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-pod-info\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.224738 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.224803 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-erlang-cookie-secret\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.224928 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-server-conf\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.225323 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9bvd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-kube-api-access-c9bvd\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc 
kubenswrapper[4955]: I1128 06:42:47.225399 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.225724 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-plugins\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.226036 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-confd\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.226099 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-config-data\") pod \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\" (UID: \"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.226872 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.227421 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.227455 4955 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.227473 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.231145 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.231473 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.231596 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-kube-api-access-c9bvd" (OuterVolumeSpecName: "kube-api-access-c9bvd") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "kube-api-access-c9bvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.231700 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-pod-info" (OuterVolumeSpecName: "pod-info") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.245519 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.288288 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-config-data" (OuterVolumeSpecName: "config-data") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.288982 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-server-conf" (OuterVolumeSpecName: "server-conf") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.330569 4955 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.330604 4955 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.330615 4955 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.330623 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9bvd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-kube-api-access-c9bvd\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.330642 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.330652 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.330661 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.458127 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.490795 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" (UID: "9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.534909 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.534942 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.613071 4955 generic.go:334] "Generic (PLEG): container finished" podID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" containerID="56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a" exitCode=0 Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.613165 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d","Type":"ContainerDied","Data":"56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a"} Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.613213 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d","Type":"ContainerDied","Data":"c109bada6b39e1d00accd6d831cce48494a6d223c2d0c733ce8311fb1b7b6429"} Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.613231 4955 scope.go:117] "RemoveContainer" containerID="56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.613181 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.617650 4955 generic.go:334] "Generic (PLEG): container finished" podID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" containerID="ea41fb5c23be427cafc3835ec29e1540540c8dc4595aea2006d9f2a5e0cea344" exitCode=0 Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.617674 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f22c44d9-b740-4aaf-bf4f-19eea62e6b42","Type":"ContainerDied","Data":"ea41fb5c23be427cafc3835ec29e1540540c8dc4595aea2006d9f2a5e0cea344"} Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.644544 4955 scope.go:117] "RemoveContainer" containerID="237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.647110 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.659069 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.738982 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" path="/var/lib/kubelet/pods/9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d/volumes" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.739772 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 06:42:47 crc kubenswrapper[4955]: E1128 06:42:47.740164 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" containerName="rabbitmq" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.740181 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" containerName="rabbitmq" Nov 28 06:42:47 crc kubenswrapper[4955]: E1128 06:42:47.740212 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" containerName="setup-container" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.740220 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" containerName="setup-container" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.740460 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa0b93e-4a0b-4c74-ab33-10f2f6734d1d" containerName="rabbitmq" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.742150 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.750079 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.750186 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.750079 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.750190 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.750409 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9x7gk" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.750494 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.750595 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.756000 4955 scope.go:117] "RemoveContainer" containerID="56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a" Nov 28 06:42:47 crc kubenswrapper[4955]: E1128 06:42:47.761344 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a\": container with ID starting with 56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a not found: ID does not exist" containerID="56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.761388 4955 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a"} err="failed to get container status \"56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a\": rpc error: code = NotFound desc = could not find container \"56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a\": container with ID starting with 56c8e030080f64aec9e790a35504399f9ded288f881eec5e7bb15f0830618e1a not found: ID does not exist" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.761416 4955 scope.go:117] "RemoveContainer" containerID="237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4" Nov 28 06:42:47 crc kubenswrapper[4955]: E1128 06:42:47.761708 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4\": container with ID starting with 237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4 not found: ID does not exist" containerID="237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.761740 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4"} err="failed to get container status \"237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4\": rpc error: code = NotFound desc = could not find container \"237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4\": container with ID starting with 237830c7be8bca486c63b696fc08cb18ca5fca2ccf24d73e99dca8ff49f9aea4 not found: ID does not exist" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.772448 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.851636 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-config-data\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.851997 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852019 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852043 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852068 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852084 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/8677f8b0-5621-470c-826f-1c2f9725c6d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852136 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852165 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjm4t\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-kube-api-access-bjm4t\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852209 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852229 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.852253 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/8677f8b0-5621-470c-826f-1c2f9725c6d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.886869 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.953793 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-tls\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.953862 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-plugins\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.953889 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-config-data\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.953911 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-plugins-conf\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.953939 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.953967 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8bnf\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-kube-api-access-s8bnf\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954040 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-pod-info\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954091 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-erlang-cookie\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954113 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-erlang-cookie-secret\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954138 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-server-conf\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 
06:42:47.954172 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-confd\") pod \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\" (UID: \"f22c44d9-b740-4aaf-bf4f-19eea62e6b42\") " Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954420 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954457 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjm4t\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-kube-api-access-bjm4t\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954517 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954535 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954556 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/8677f8b0-5621-470c-826f-1c2f9725c6d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954639 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-config-data\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954662 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954677 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954692 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.954711 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 
crc kubenswrapper[4955]: I1128 06:42:47.954725 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8677f8b0-5621-470c-826f-1c2f9725c6d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.955927 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.956702 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-config-data\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.957062 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.957364 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.959367 4955 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.960338 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.960783 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.961645 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.962477 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.963394 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8677f8b0-5621-470c-826f-1c2f9725c6d7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.963608 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.964735 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8677f8b0-5621-470c-826f-1c2f9725c6d7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.966891 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.967859 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.967892 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-pod-info" (OuterVolumeSpecName: "pod-info") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.969219 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8677f8b0-5621-470c-826f-1c2f9725c6d7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.969938 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.973187 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-kube-api-access-s8bnf" (OuterVolumeSpecName: "kube-api-access-s8bnf") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "kube-api-access-s8bnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:47 crc kubenswrapper[4955]: I1128 06:42:47.996832 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjm4t\" (UniqueName: \"kubernetes.io/projected/8677f8b0-5621-470c-826f-1c2f9725c6d7-kube-api-access-bjm4t\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.019056 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-config-data" (OuterVolumeSpecName: "config-data") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.045922 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-server-conf" (OuterVolumeSpecName: "server-conf") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.046153 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"8677f8b0-5621-470c-826f-1c2f9725c6d7\") " pod="openstack/rabbitmq-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062551 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062584 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062595 4955 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062628 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062639 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8bnf\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-kube-api-access-s8bnf\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062648 4955 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 
crc kubenswrapper[4955]: I1128 06:42:48.062657 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062666 4955 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062674 4955 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.062682 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.078402 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.089467 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.105818 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f22c44d9-b740-4aaf-bf4f-19eea62e6b42" (UID: "f22c44d9-b740-4aaf-bf4f-19eea62e6b42"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.164014 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.164043 4955 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f22c44d9-b740-4aaf-bf4f-19eea62e6b42-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:48 crc kubenswrapper[4955]: W1128 06:42:48.541607 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8677f8b0_5621_470c_826f_1c2f9725c6d7.slice/crio-6de9a1ba442d73eeeffc3c17c969c9e7621c531f81d4cfaaddc77e8918e27039 WatchSource:0}: Error finding container 6de9a1ba442d73eeeffc3c17c969c9e7621c531f81d4cfaaddc77e8918e27039: Status 404 returned error can't find the container with id 6de9a1ba442d73eeeffc3c17c969c9e7621c531f81d4cfaaddc77e8918e27039 Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.543264 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.631829 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f22c44d9-b740-4aaf-bf4f-19eea62e6b42","Type":"ContainerDied","Data":"ed3b602cb3bd403aef0135df806b2b7fb8fff09a2cf9fb358cb6ccf8d3fc8fea"} Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.631887 4955 scope.go:117] "RemoveContainer" containerID="ea41fb5c23be427cafc3835ec29e1540540c8dc4595aea2006d9f2a5e0cea344" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.632071 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.635126 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8677f8b0-5621-470c-826f-1c2f9725c6d7","Type":"ContainerStarted","Data":"6de9a1ba442d73eeeffc3c17c969c9e7621c531f81d4cfaaddc77e8918e27039"} Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.662768 4955 scope.go:117] "RemoveContainer" containerID="d87219f0bc006bb8d8315faf869bcf63563d1364fc23cc98cb44a72498571ff2" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.685200 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.694377 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.718829 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:42:48 crc kubenswrapper[4955]: E1128 06:42:48.719365 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" containerName="setup-container" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.719386 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" containerName="setup-container" Nov 28 06:42:48 crc kubenswrapper[4955]: E1128 06:42:48.719425 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" containerName="rabbitmq" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.719434 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" containerName="rabbitmq" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.719659 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" containerName="rabbitmq" 
Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.721040 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.723380 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.723564 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.723850 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.724027 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vhjwn" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.724345 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.725122 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.726092 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.738819 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.880774 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 
06:42:48.880839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.880922 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r5jt\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-kube-api-access-4r5jt\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.880962 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.881005 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.881249 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: 
I1128 06:42:48.881361 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.881395 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.881433 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.881672 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.881760 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984259 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984376 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984454 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984496 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984592 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r5jt\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-kube-api-access-4r5jt\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984640 4955 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984691 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984775 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984830 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984861 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.984894 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.985157 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.985310 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.986100 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.986771 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.986845 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.987232 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.990486 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:48 crc kubenswrapper[4955]: I1128 06:42:48.990583 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:48.997001 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.004280 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.009073 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4r5jt\" (UniqueName: \"kubernetes.io/projected/c326a903-f8eb-4e06-a44b-ae3bca93e0b6-kube-api-access-4r5jt\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.014736 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c326a903-f8eb-4e06-a44b-ae3bca93e0b6\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.048211 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.482688 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 06:42:49 crc kubenswrapper[4955]: W1128 06:42:49.489426 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc326a903_f8eb_4e06_a44b_ae3bca93e0b6.slice/crio-c2f6cf9996350b7ef1183cd34436b406df1f2a7b3e421947b63b1da3ad10de11 WatchSource:0}: Error finding container c2f6cf9996350b7ef1183cd34436b406df1f2a7b3e421947b63b1da3ad10de11: Status 404 returned error can't find the container with id c2f6cf9996350b7ef1183cd34436b406df1f2a7b3e421947b63b1da3ad10de11 Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.651035 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c326a903-f8eb-4e06-a44b-ae3bca93e0b6","Type":"ContainerStarted","Data":"c2f6cf9996350b7ef1183cd34436b406df1f2a7b3e421947b63b1da3ad10de11"} Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.720576 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f22c44d9-b740-4aaf-bf4f-19eea62e6b42" 
path="/var/lib/kubelet/pods/f22c44d9-b740-4aaf-bf4f-19eea62e6b42/volumes" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.807682 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-h7llt"] Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.813211 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.817051 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.830987 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-h7llt"] Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.912730 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.912767 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-config\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.912873 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmll8\" (UniqueName: \"kubernetes.io/projected/71c80ffe-aa2c-4ddb-838c-daea7336b737-kube-api-access-wmll8\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:49 crc 
kubenswrapper[4955]: I1128 06:42:49.912895 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.912921 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.912940 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-svc\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.912956 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.927348 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-h7llt"] Nov 28 06:42:49 crc kubenswrapper[4955]: E1128 06:42:49.928010 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-wmll8 
openstack-edpm-ipam ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-67b789f86c-h7llt" podUID="71c80ffe-aa2c-4ddb-838c-daea7336b737" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.951124 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-jb4z6"] Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.953018 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:49 crc kubenswrapper[4955]: I1128 06:42:49.971776 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-jb4z6"] Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.015732 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmll8\" (UniqueName: \"kubernetes.io/projected/71c80ffe-aa2c-4ddb-838c-daea7336b737-kube-api-access-wmll8\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.015824 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.015886 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.015926 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-svc\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.015952 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.016067 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.016091 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-config\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.017975 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-svc\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.018770 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.019875 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-config\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.020941 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.022207 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.022312 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.033533 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmll8\" (UniqueName: 
\"kubernetes.io/projected/71c80ffe-aa2c-4ddb-838c-daea7336b737-kube-api-access-wmll8\") pod \"dnsmasq-dns-67b789f86c-h7llt\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.118030 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.118090 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86pvq\" (UniqueName: \"kubernetes.io/projected/5eb6e022-3f20-498e-ac8d-8fed796ff122-kube-api-access-86pvq\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.118193 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.118222 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.118242 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-config\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.118258 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.118286 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.219637 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.219710 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.219734 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-config\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.219752 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.219781 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.219811 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.219853 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86pvq\" (UniqueName: \"kubernetes.io/projected/5eb6e022-3f20-498e-ac8d-8fed796ff122-kube-api-access-86pvq\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.220988 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.221188 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.221482 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.222028 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-config\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.226272 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.226598 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5eb6e022-3f20-498e-ac8d-8fed796ff122-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" 
(UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.237033 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pvq\" (UniqueName: \"kubernetes.io/projected/5eb6e022-3f20-498e-ac8d-8fed796ff122-kube-api-access-86pvq\") pod \"dnsmasq-dns-cb6ffcf87-jb4z6\" (UID: \"5eb6e022-3f20-498e-ac8d-8fed796ff122\") " pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.276398 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.662221 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.663238 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8677f8b0-5621-470c-826f-1c2f9725c6d7","Type":"ContainerStarted","Data":"014b0278c7907ed4ad137b4075fab03ffe7f2944e1e200bd36018322c021057c"} Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.678308 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.779890 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-jb4z6"] Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.828301 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-sb\") pod \"71c80ffe-aa2c-4ddb-838c-daea7336b737\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.828807 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "71c80ffe-aa2c-4ddb-838c-daea7336b737" (UID: "71c80ffe-aa2c-4ddb-838c-daea7336b737"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.829136 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-nb\") pod \"71c80ffe-aa2c-4ddb-838c-daea7336b737\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.829240 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmll8\" (UniqueName: \"kubernetes.io/projected/71c80ffe-aa2c-4ddb-838c-daea7336b737-kube-api-access-wmll8\") pod \"71c80ffe-aa2c-4ddb-838c-daea7336b737\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.829629 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-svc\") pod 
\"71c80ffe-aa2c-4ddb-838c-daea7336b737\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.829699 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "71c80ffe-aa2c-4ddb-838c-daea7336b737" (UID: "71c80ffe-aa2c-4ddb-838c-daea7336b737"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.829909 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-openstack-edpm-ipam\") pod \"71c80ffe-aa2c-4ddb-838c-daea7336b737\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.830033 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "71c80ffe-aa2c-4ddb-838c-daea7336b737" (UID: "71c80ffe-aa2c-4ddb-838c-daea7336b737"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.830355 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "71c80ffe-aa2c-4ddb-838c-daea7336b737" (UID: "71c80ffe-aa2c-4ddb-838c-daea7336b737"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.830574 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-config\") pod \"71c80ffe-aa2c-4ddb-838c-daea7336b737\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.831257 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-swift-storage-0\") pod \"71c80ffe-aa2c-4ddb-838c-daea7336b737\" (UID: \"71c80ffe-aa2c-4ddb-838c-daea7336b737\") " Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.831202 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-config" (OuterVolumeSpecName: "config") pod "71c80ffe-aa2c-4ddb-838c-daea7336b737" (UID: "71c80ffe-aa2c-4ddb-838c-daea7336b737"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.831969 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "71c80ffe-aa2c-4ddb-838c-daea7336b737" (UID: "71c80ffe-aa2c-4ddb-838c-daea7336b737"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.833030 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c80ffe-aa2c-4ddb-838c-daea7336b737-kube-api-access-wmll8" (OuterVolumeSpecName: "kube-api-access-wmll8") pod "71c80ffe-aa2c-4ddb-838c-daea7336b737" (UID: "71c80ffe-aa2c-4ddb-838c-daea7336b737"). 
InnerVolumeSpecName "kube-api-access-wmll8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.837344 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.837380 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.837393 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmll8\" (UniqueName: \"kubernetes.io/projected/71c80ffe-aa2c-4ddb-838c-daea7336b737-kube-api-access-wmll8\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.837408 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.837420 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.837433 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:50 crc kubenswrapper[4955]: I1128 06:42:50.837444 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/71c80ffe-aa2c-4ddb-838c-daea7336b737-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:42:51 
crc kubenswrapper[4955]: I1128 06:42:51.672163 4955 generic.go:334] "Generic (PLEG): container finished" podID="5eb6e022-3f20-498e-ac8d-8fed796ff122" containerID="24fc631763924ca9beadfe5c5dec44b7adb2fc02cdad6b5b2be0ef1662c9b814" exitCode=0 Nov 28 06:42:51 crc kubenswrapper[4955]: I1128 06:42:51.672308 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" event={"ID":"5eb6e022-3f20-498e-ac8d-8fed796ff122","Type":"ContainerDied","Data":"24fc631763924ca9beadfe5c5dec44b7adb2fc02cdad6b5b2be0ef1662c9b814"} Nov 28 06:42:51 crc kubenswrapper[4955]: I1128 06:42:51.672346 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" event={"ID":"5eb6e022-3f20-498e-ac8d-8fed796ff122","Type":"ContainerStarted","Data":"0fab0678757b7ab2485beb2e9549bdd200c5c834289898210c8ea5f3dd908499"} Nov 28 06:42:51 crc kubenswrapper[4955]: I1128 06:42:51.674599 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:42:51 crc kubenswrapper[4955]: I1128 06:42:51.674919 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c326a903-f8eb-4e06-a44b-ae3bca93e0b6","Type":"ContainerStarted","Data":"d75bff7a52ad9022b32e0b48996336e9ec93c8e7eebd13bcc8ec3cad5e0a6a35"} Nov 28 06:42:52 crc kubenswrapper[4955]: I1128 06:42:52.685700 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" event={"ID":"5eb6e022-3f20-498e-ac8d-8fed796ff122","Type":"ContainerStarted","Data":"aaeb28b4793f21a424b82b41345a822bb951185a9d3aac65215d204c7e29bbe1"} Nov 28 06:42:52 crc kubenswrapper[4955]: I1128 06:42:52.718454 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" podStartSLOduration=3.718431374 podStartE2EDuration="3.718431374s" podCreationTimestamp="2025-11-28 06:42:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:42:52.716282043 +0000 UTC m=+1295.305537663" watchObservedRunningTime="2025-11-28 06:42:52.718431374 +0000 UTC m=+1295.307686944" Nov 28 06:42:53 crc kubenswrapper[4955]: I1128 06:42:53.393278 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:42:53 crc kubenswrapper[4955]: I1128 06:42:53.393332 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:42:53 crc kubenswrapper[4955]: I1128 06:42:53.695881 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:43:00 crc kubenswrapper[4955]: I1128 06:43:00.278861 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-jb4z6" Nov 28 06:43:00 crc kubenswrapper[4955]: I1128 06:43:00.363590 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qwl6p"] Nov 28 06:43:00 crc kubenswrapper[4955]: I1128 06:43:00.363886 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" podUID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" containerName="dnsmasq-dns" containerID="cri-o://af2c3a34ac482df3be679ac7702db3fdf0e67ea2d75e5ebbcf39fcb6bcdd8125" gracePeriod=10 Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:00.792188 4955 generic.go:334] "Generic (PLEG): 
container finished" podID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" containerID="af2c3a34ac482df3be679ac7702db3fdf0e67ea2d75e5ebbcf39fcb6bcdd8125" exitCode=0 Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:00.792270 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" event={"ID":"df00d334-73f9-4bec-9dd6-99e06f16e4bc","Type":"ContainerDied","Data":"af2c3a34ac482df3be679ac7702db3fdf0e67ea2d75e5ebbcf39fcb6bcdd8125"} Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:00.894472 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.090477 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-swift-storage-0\") pod \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.090555 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-sb\") pod \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.090605 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-config\") pod \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.090709 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-svc\") pod 
\"df00d334-73f9-4bec-9dd6-99e06f16e4bc\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.090757 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhzmx\" (UniqueName: \"kubernetes.io/projected/df00d334-73f9-4bec-9dd6-99e06f16e4bc-kube-api-access-dhzmx\") pod \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.090784 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-nb\") pod \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\" (UID: \"df00d334-73f9-4bec-9dd6-99e06f16e4bc\") " Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.101013 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df00d334-73f9-4bec-9dd6-99e06f16e4bc-kube-api-access-dhzmx" (OuterVolumeSpecName: "kube-api-access-dhzmx") pod "df00d334-73f9-4bec-9dd6-99e06f16e4bc" (UID: "df00d334-73f9-4bec-9dd6-99e06f16e4bc"). InnerVolumeSpecName "kube-api-access-dhzmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.160216 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df00d334-73f9-4bec-9dd6-99e06f16e4bc" (UID: "df00d334-73f9-4bec-9dd6-99e06f16e4bc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.161631 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-config" (OuterVolumeSpecName: "config") pod "df00d334-73f9-4bec-9dd6-99e06f16e4bc" (UID: "df00d334-73f9-4bec-9dd6-99e06f16e4bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.166132 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df00d334-73f9-4bec-9dd6-99e06f16e4bc" (UID: "df00d334-73f9-4bec-9dd6-99e06f16e4bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.169852 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df00d334-73f9-4bec-9dd6-99e06f16e4bc" (UID: "df00d334-73f9-4bec-9dd6-99e06f16e4bc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.169901 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df00d334-73f9-4bec-9dd6-99e06f16e4bc" (UID: "df00d334-73f9-4bec-9dd6-99e06f16e4bc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.193256 4955 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.193285 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhzmx\" (UniqueName: \"kubernetes.io/projected/df00d334-73f9-4bec-9dd6-99e06f16e4bc-kube-api-access-dhzmx\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.193299 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.193307 4955 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.193317 4955 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.193326 4955 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df00d334-73f9-4bec-9dd6-99e06f16e4bc-config\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.812669 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" event={"ID":"df00d334-73f9-4bec-9dd6-99e06f16e4bc","Type":"ContainerDied","Data":"67a20940f6cb8e5527595ff3837a0605759bd6c784c8994961b11288ac90b0c0"} Nov 28 06:43:01 crc 
kubenswrapper[4955]: I1128 06:43:01.812777 4955 scope.go:117] "RemoveContainer" containerID="af2c3a34ac482df3be679ac7702db3fdf0e67ea2d75e5ebbcf39fcb6bcdd8125" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.813015 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-qwl6p" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.846860 4955 scope.go:117] "RemoveContainer" containerID="71fe32ff4ee2ec6be007e8cb7a45902df6f797346aed76543555a61eac6bf738" Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.853210 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qwl6p"] Nov 28 06:43:01 crc kubenswrapper[4955]: I1128 06:43:01.863151 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-qwl6p"] Nov 28 06:43:03 crc kubenswrapper[4955]: I1128 06:43:03.725707 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" path="/var/lib/kubelet/pods/df00d334-73f9-4bec-9dd6-99e06f16e4bc/volumes" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.128556 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9"] Nov 28 06:43:09 crc kubenswrapper[4955]: E1128 06:43:09.129560 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" containerName="init" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.129577 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" containerName="init" Nov 28 06:43:09 crc kubenswrapper[4955]: E1128 06:43:09.129608 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" containerName="dnsmasq-dns" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.129617 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" containerName="dnsmasq-dns" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.129868 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="df00d334-73f9-4bec-9dd6-99e06f16e4bc" containerName="dnsmasq-dns" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.130794 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.135303 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.135648 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.136619 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.147193 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.156482 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9"] Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.258208 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.258415 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.258711 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.258812 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs6qp\" (UniqueName: \"kubernetes.io/projected/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-kube-api-access-cs6qp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.361186 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.361296 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs6qp\" (UniqueName: \"kubernetes.io/projected/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-kube-api-access-cs6qp\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.361388 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.361554 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.367691 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.369108 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.374460 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.382821 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs6qp\" (UniqueName: \"kubernetes.io/projected/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-kube-api-access-cs6qp\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:09 crc kubenswrapper[4955]: I1128 06:43:09.502588 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:10 crc kubenswrapper[4955]: I1128 06:43:10.044374 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9"] Nov 28 06:43:10 crc kubenswrapper[4955]: W1128 06:43:10.048785 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54f3d846_b19a_415e_93bb_9f4c1a3e02dc.slice/crio-e30055da21da91392741461f5ace24696d4e375eb3134747db0cf043073ce4c8 WatchSource:0}: Error finding container e30055da21da91392741461f5ace24696d4e375eb3134747db0cf043073ce4c8: Status 404 returned error can't find the container with id e30055da21da91392741461f5ace24696d4e375eb3134747db0cf043073ce4c8 Nov 28 06:43:10 crc kubenswrapper[4955]: I1128 06:43:10.052606 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:43:10 crc kubenswrapper[4955]: I1128 06:43:10.919868 4955 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" event={"ID":"54f3d846-b19a-415e-93bb-9f4c1a3e02dc","Type":"ContainerStarted","Data":"e30055da21da91392741461f5ace24696d4e375eb3134747db0cf043073ce4c8"} Nov 28 06:43:18 crc kubenswrapper[4955]: I1128 06:43:18.160241 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:43:19 crc kubenswrapper[4955]: I1128 06:43:19.119934 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" event={"ID":"54f3d846-b19a-415e-93bb-9f4c1a3e02dc","Type":"ContainerStarted","Data":"8fc201ba203996728a63811f5d4dfb7170b01377593ada810fa6ba274f3b065c"} Nov 28 06:43:19 crc kubenswrapper[4955]: I1128 06:43:19.177275 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" podStartSLOduration=2.071466659 podStartE2EDuration="10.177242123s" podCreationTimestamp="2025-11-28 06:43:09 +0000 UTC" firstStartedPulling="2025-11-28 06:43:10.052329087 +0000 UTC m=+1312.641584657" lastFinishedPulling="2025-11-28 06:43:18.158104561 +0000 UTC m=+1320.747360121" observedRunningTime="2025-11-28 06:43:19.150757817 +0000 UTC m=+1321.740013417" watchObservedRunningTime="2025-11-28 06:43:19.177242123 +0000 UTC m=+1321.766497753" Nov 28 06:43:21 crc kubenswrapper[4955]: I1128 06:43:21.851478 4955 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod71c80ffe-aa2c-4ddb-838c-daea7336b737"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod71c80ffe-aa2c-4ddb-838c-daea7336b737] : Timed out while waiting for systemd to remove kubepods-besteffort-pod71c80ffe_aa2c_4ddb_838c_daea7336b737.slice" Nov 28 06:43:21 crc kubenswrapper[4955]: E1128 06:43:21.851939 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths 
for [kubepods besteffort pod71c80ffe-aa2c-4ddb-838c-daea7336b737] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod71c80ffe-aa2c-4ddb-838c-daea7336b737] : Timed out while waiting for systemd to remove kubepods-besteffort-pod71c80ffe_aa2c_4ddb_838c_daea7336b737.slice" pod="openstack/dnsmasq-dns-67b789f86c-h7llt" podUID="71c80ffe-aa2c-4ddb-838c-daea7336b737" Nov 28 06:43:22 crc kubenswrapper[4955]: I1128 06:43:22.149395 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-h7llt" Nov 28 06:43:22 crc kubenswrapper[4955]: I1128 06:43:22.216296 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-h7llt"] Nov 28 06:43:22 crc kubenswrapper[4955]: I1128 06:43:22.226907 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-h7llt"] Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.172754 4955 generic.go:334] "Generic (PLEG): container finished" podID="c326a903-f8eb-4e06-a44b-ae3bca93e0b6" containerID="d75bff7a52ad9022b32e0b48996336e9ec93c8e7eebd13bcc8ec3cad5e0a6a35" exitCode=0 Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.172835 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c326a903-f8eb-4e06-a44b-ae3bca93e0b6","Type":"ContainerDied","Data":"d75bff7a52ad9022b32e0b48996336e9ec93c8e7eebd13bcc8ec3cad5e0a6a35"} Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.176852 4955 generic.go:334] "Generic (PLEG): container finished" podID="8677f8b0-5621-470c-826f-1c2f9725c6d7" containerID="014b0278c7907ed4ad137b4075fab03ffe7f2944e1e200bd36018322c021057c" exitCode=0 Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.176933 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"8677f8b0-5621-470c-826f-1c2f9725c6d7","Type":"ContainerDied","Data":"014b0278c7907ed4ad137b4075fab03ffe7f2944e1e200bd36018322c021057c"} Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.394093 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.394384 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.394423 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.395125 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4aed55a0a733fd5fb3966b873f86b550981c4a573b9c9f3bb84203a8e1648584"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.395175 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://4aed55a0a733fd5fb3966b873f86b550981c4a573b9c9f3bb84203a8e1648584" gracePeriod=600 Nov 28 06:43:23 crc kubenswrapper[4955]: I1128 06:43:23.714571 4955 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c80ffe-aa2c-4ddb-838c-daea7336b737" path="/var/lib/kubelet/pods/71c80ffe-aa2c-4ddb-838c-daea7336b737/volumes" Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.193607 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8677f8b0-5621-470c-826f-1c2f9725c6d7","Type":"ContainerStarted","Data":"517690fc46cf828c545e69bc353494c893864852146ac221465721ccffbb5464"} Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.195254 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.201131 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="4aed55a0a733fd5fb3966b873f86b550981c4a573b9c9f3bb84203a8e1648584" exitCode=0 Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.201213 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"4aed55a0a733fd5fb3966b873f86b550981c4a573b9c9f3bb84203a8e1648584"} Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.201266 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476"} Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.201284 4955 scope.go:117] "RemoveContainer" containerID="5a33364ffc1fcadc84c98fe0fe29a3e3b087189f2758e47ffce1858ea966d6d9" Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.206420 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"c326a903-f8eb-4e06-a44b-ae3bca93e0b6","Type":"ContainerStarted","Data":"78d71eb715f3e7be17e267ccde21eecfdb31061108f0b95c560ba855d860d4db"} Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.206760 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.228801 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.228778991 podStartE2EDuration="37.228778991s" podCreationTimestamp="2025-11-28 06:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:43:24.224579282 +0000 UTC m=+1326.813834912" watchObservedRunningTime="2025-11-28 06:43:24.228778991 +0000 UTC m=+1326.818034581" Nov 28 06:43:24 crc kubenswrapper[4955]: I1128 06:43:24.292302 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.292262941 podStartE2EDuration="36.292262941s" podCreationTimestamp="2025-11-28 06:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 06:43:24.280488179 +0000 UTC m=+1326.869743819" watchObservedRunningTime="2025-11-28 06:43:24.292262941 +0000 UTC m=+1326.881518531" Nov 28 06:43:31 crc kubenswrapper[4955]: I1128 06:43:31.289041 4955 generic.go:334] "Generic (PLEG): container finished" podID="54f3d846-b19a-415e-93bb-9f4c1a3e02dc" containerID="8fc201ba203996728a63811f5d4dfb7170b01377593ada810fa6ba274f3b065c" exitCode=0 Nov 28 06:43:31 crc kubenswrapper[4955]: I1128 06:43:31.289252 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" 
event={"ID":"54f3d846-b19a-415e-93bb-9f4c1a3e02dc","Type":"ContainerDied","Data":"8fc201ba203996728a63811f5d4dfb7170b01377593ada810fa6ba274f3b065c"} Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.782675 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.855177 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs6qp\" (UniqueName: \"kubernetes.io/projected/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-kube-api-access-cs6qp\") pod \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.855605 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-inventory\") pod \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.855664 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-ssh-key\") pod \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.855744 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-repo-setup-combined-ca-bundle\") pod \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\" (UID: \"54f3d846-b19a-415e-93bb-9f4c1a3e02dc\") " Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.862732 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "54f3d846-b19a-415e-93bb-9f4c1a3e02dc" (UID: "54f3d846-b19a-415e-93bb-9f4c1a3e02dc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.862880 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-kube-api-access-cs6qp" (OuterVolumeSpecName: "kube-api-access-cs6qp") pod "54f3d846-b19a-415e-93bb-9f4c1a3e02dc" (UID: "54f3d846-b19a-415e-93bb-9f4c1a3e02dc"). InnerVolumeSpecName "kube-api-access-cs6qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.886010 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "54f3d846-b19a-415e-93bb-9f4c1a3e02dc" (UID: "54f3d846-b19a-415e-93bb-9f4c1a3e02dc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.895542 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-inventory" (OuterVolumeSpecName: "inventory") pod "54f3d846-b19a-415e-93bb-9f4c1a3e02dc" (UID: "54f3d846-b19a-415e-93bb-9f4c1a3e02dc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.959748 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs6qp\" (UniqueName: \"kubernetes.io/projected/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-kube-api-access-cs6qp\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.959782 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.959795 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:32 crc kubenswrapper[4955]: I1128 06:43:32.959808 4955 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f3d846-b19a-415e-93bb-9f4c1a3e02dc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.312539 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" event={"ID":"54f3d846-b19a-415e-93bb-9f4c1a3e02dc","Type":"ContainerDied","Data":"e30055da21da91392741461f5ace24696d4e375eb3134747db0cf043073ce4c8"} Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.312579 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e30055da21da91392741461f5ace24696d4e375eb3134747db0cf043073ce4c8" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.312588 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.396721 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh"] Nov 28 06:43:33 crc kubenswrapper[4955]: E1128 06:43:33.397123 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f3d846-b19a-415e-93bb-9f4c1a3e02dc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.397138 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f3d846-b19a-415e-93bb-9f4c1a3e02dc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.397344 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f3d846-b19a-415e-93bb-9f4c1a3e02dc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.398024 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.401823 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.402033 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.402202 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.402889 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.407613 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh"] Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.469981 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.470140 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwczx\" (UniqueName: \"kubernetes.io/projected/916114e1-c9f4-45af-acbd-14fa82b380ed-kube-api-access-zwczx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.470213 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.572148 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.572249 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.572318 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwczx\" (UniqueName: \"kubernetes.io/projected/916114e1-c9f4-45af-acbd-14fa82b380ed-kube-api-access-zwczx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.580234 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.580348 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.586489 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwczx\" (UniqueName: \"kubernetes.io/projected/916114e1-c9f4-45af-acbd-14fa82b380ed-kube-api-access-zwczx\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-kk4kh\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:33 crc kubenswrapper[4955]: I1128 06:43:33.716223 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:34 crc kubenswrapper[4955]: I1128 06:43:34.258821 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh"] Nov 28 06:43:34 crc kubenswrapper[4955]: I1128 06:43:34.325180 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" event={"ID":"916114e1-c9f4-45af-acbd-14fa82b380ed","Type":"ContainerStarted","Data":"5ceb1d610ec7945fc3b03199941d2d9cf344b8771bd7b186e6b570aec8e8de86"} Nov 28 06:43:35 crc kubenswrapper[4955]: I1128 06:43:35.336161 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" event={"ID":"916114e1-c9f4-45af-acbd-14fa82b380ed","Type":"ContainerStarted","Data":"c5841d51772e70839fbfe5470a60d019a55a0431c8fd4510e1867e29f31c63f4"} Nov 28 06:43:35 crc kubenswrapper[4955]: I1128 06:43:35.367372 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" podStartSLOduration=1.913609222 podStartE2EDuration="2.367330139s" podCreationTimestamp="2025-11-28 06:43:33 +0000 UTC" firstStartedPulling="2025-11-28 06:43:34.276875554 +0000 UTC m=+1336.866131124" lastFinishedPulling="2025-11-28 06:43:34.730596461 +0000 UTC m=+1337.319852041" observedRunningTime="2025-11-28 06:43:35.354989681 +0000 UTC m=+1337.944245271" watchObservedRunningTime="2025-11-28 06:43:35.367330139 +0000 UTC m=+1337.956585719" Nov 28 06:43:38 crc kubenswrapper[4955]: I1128 06:43:38.082829 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 06:43:38 crc kubenswrapper[4955]: I1128 06:43:38.374438 4955 generic.go:334] "Generic (PLEG): container finished" podID="916114e1-c9f4-45af-acbd-14fa82b380ed" 
containerID="c5841d51772e70839fbfe5470a60d019a55a0431c8fd4510e1867e29f31c63f4" exitCode=0 Nov 28 06:43:38 crc kubenswrapper[4955]: I1128 06:43:38.374772 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" event={"ID":"916114e1-c9f4-45af-acbd-14fa82b380ed","Type":"ContainerDied","Data":"c5841d51772e70839fbfe5470a60d019a55a0431c8fd4510e1867e29f31c63f4"} Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.051942 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.822761 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.896172 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-ssh-key\") pod \"916114e1-c9f4-45af-acbd-14fa82b380ed\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.896410 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-inventory\") pod \"916114e1-c9f4-45af-acbd-14fa82b380ed\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.896446 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwczx\" (UniqueName: \"kubernetes.io/projected/916114e1-c9f4-45af-acbd-14fa82b380ed-kube-api-access-zwczx\") pod \"916114e1-c9f4-45af-acbd-14fa82b380ed\" (UID: \"916114e1-c9f4-45af-acbd-14fa82b380ed\") " Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.902715 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/916114e1-c9f4-45af-acbd-14fa82b380ed-kube-api-access-zwczx" (OuterVolumeSpecName: "kube-api-access-zwczx") pod "916114e1-c9f4-45af-acbd-14fa82b380ed" (UID: "916114e1-c9f4-45af-acbd-14fa82b380ed"). InnerVolumeSpecName "kube-api-access-zwczx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.921662 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "916114e1-c9f4-45af-acbd-14fa82b380ed" (UID: "916114e1-c9f4-45af-acbd-14fa82b380ed"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.928229 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-inventory" (OuterVolumeSpecName: "inventory") pod "916114e1-c9f4-45af-acbd-14fa82b380ed" (UID: "916114e1-c9f4-45af-acbd-14fa82b380ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.999143 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.999175 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/916114e1-c9f4-45af-acbd-14fa82b380ed-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:39 crc kubenswrapper[4955]: I1128 06:43:39.999186 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwczx\" (UniqueName: \"kubernetes.io/projected/916114e1-c9f4-45af-acbd-14fa82b380ed-kube-api-access-zwczx\") on node \"crc\" DevicePath \"\"" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.399457 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" event={"ID":"916114e1-c9f4-45af-acbd-14fa82b380ed","Type":"ContainerDied","Data":"5ceb1d610ec7945fc3b03199941d2d9cf344b8771bd7b186e6b570aec8e8de86"} Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.399571 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ceb1d610ec7945fc3b03199941d2d9cf344b8771bd7b186e6b570aec8e8de86" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.399677 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-kk4kh" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.472189 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs"] Nov 28 06:43:40 crc kubenswrapper[4955]: E1128 06:43:40.472656 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916114e1-c9f4-45af-acbd-14fa82b380ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.472677 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="916114e1-c9f4-45af-acbd-14fa82b380ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.472912 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="916114e1-c9f4-45af-acbd-14fa82b380ed" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.473638 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.480128 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.480385 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.480619 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.480748 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.487183 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs"] Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.521885 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.521965 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.522054 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjfw\" (UniqueName: \"kubernetes.io/projected/be0906bb-475c-4229-9a9f-9a5361e6172e-kube-api-access-chjfw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.522120 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.623905 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.624025 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.624123 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjfw\" (UniqueName: \"kubernetes.io/projected/be0906bb-475c-4229-9a9f-9a5361e6172e-kube-api-access-chjfw\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.624198 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.629159 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.629228 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.629389 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.644324 4955 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-chjfw\" (UniqueName: \"kubernetes.io/projected/be0906bb-475c-4229-9a9f-9a5361e6172e-kube-api-access-chjfw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:40 crc kubenswrapper[4955]: I1128 06:43:40.819814 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:43:41 crc kubenswrapper[4955]: I1128 06:43:41.380213 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs"] Nov 28 06:43:41 crc kubenswrapper[4955]: I1128 06:43:41.421680 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" event={"ID":"be0906bb-475c-4229-9a9f-9a5361e6172e","Type":"ContainerStarted","Data":"75b71198a684816581edee7ac41909aa377d7ba165ef30a79a2a880e2e12ad5b"} Nov 28 06:43:42 crc kubenswrapper[4955]: I1128 06:43:42.437884 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" event={"ID":"be0906bb-475c-4229-9a9f-9a5361e6172e","Type":"ContainerStarted","Data":"8033cbc14d1b2653e9178010e618283713a20044717b0d33a2340fc53c0c0672"} Nov 28 06:43:42 crc kubenswrapper[4955]: I1128 06:43:42.469464 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" podStartSLOduration=1.826401422 podStartE2EDuration="2.469444758s" podCreationTimestamp="2025-11-28 06:43:40 +0000 UTC" firstStartedPulling="2025-11-28 06:43:41.380071984 +0000 UTC m=+1343.969327554" lastFinishedPulling="2025-11-28 06:43:42.02311532 +0000 UTC m=+1344.612370890" observedRunningTime="2025-11-28 06:43:42.455672719 +0000 UTC m=+1345.044928329" watchObservedRunningTime="2025-11-28 
06:43:42.469444758 +0000 UTC m=+1345.058700338" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.360290 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c272h"] Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.363280 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.375035 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c272h"] Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.530582 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc4tv\" (UniqueName: \"kubernetes.io/projected/1d951c94-8c04-495f-b294-92a4cb70cd63-kube-api-access-nc4tv\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.530713 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d951c94-8c04-495f-b294-92a4cb70cd63-catalog-content\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.530741 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d951c94-8c04-495f-b294-92a4cb70cd63-utilities\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.633378 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nc4tv\" (UniqueName: \"kubernetes.io/projected/1d951c94-8c04-495f-b294-92a4cb70cd63-kube-api-access-nc4tv\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.633568 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d951c94-8c04-495f-b294-92a4cb70cd63-catalog-content\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.633595 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d951c94-8c04-495f-b294-92a4cb70cd63-utilities\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.634490 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d951c94-8c04-495f-b294-92a4cb70cd63-utilities\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.634559 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d951c94-8c04-495f-b294-92a4cb70cd63-catalog-content\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.658863 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc4tv\" (UniqueName: 
\"kubernetes.io/projected/1d951c94-8c04-495f-b294-92a4cb70cd63-kube-api-access-nc4tv\") pod \"certified-operators-c272h\" (UID: \"1d951c94-8c04-495f-b294-92a4cb70cd63\") " pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:23 crc kubenswrapper[4955]: I1128 06:44:23.695935 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:24 crc kubenswrapper[4955]: I1128 06:44:24.181184 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c272h"] Nov 28 06:44:24 crc kubenswrapper[4955]: I1128 06:44:24.892381 4955 generic.go:334] "Generic (PLEG): container finished" podID="1d951c94-8c04-495f-b294-92a4cb70cd63" containerID="e24d34892a08dec13cc87c3fb7c4a64826f6e882a5e9d2f946d40d4b7df87bd7" exitCode=0 Nov 28 06:44:24 crc kubenswrapper[4955]: I1128 06:44:24.892725 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c272h" event={"ID":"1d951c94-8c04-495f-b294-92a4cb70cd63","Type":"ContainerDied","Data":"e24d34892a08dec13cc87c3fb7c4a64826f6e882a5e9d2f946d40d4b7df87bd7"} Nov 28 06:44:24 crc kubenswrapper[4955]: I1128 06:44:24.892776 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c272h" event={"ID":"1d951c94-8c04-495f-b294-92a4cb70cd63","Type":"ContainerStarted","Data":"6988bb844a3aeec01d27d61f949cc1c9da92e35f2484589fb177a74851bb4614"} Nov 28 06:44:25 crc kubenswrapper[4955]: I1128 06:44:25.001296 4955 scope.go:117] "RemoveContainer" containerID="8101e5ada4f68fbbea08c2b9de0f3e1037e29d658078b4e9a60ac3ec8a3eb327" Nov 28 06:44:30 crc kubenswrapper[4955]: I1128 06:44:30.949984 4955 generic.go:334] "Generic (PLEG): container finished" podID="1d951c94-8c04-495f-b294-92a4cb70cd63" containerID="4b4828ab60507db98b968fb3375d4353a7c7405aad5f0a178fd881d6ee7c44bd" exitCode=0 Nov 28 06:44:30 crc kubenswrapper[4955]: I1128 06:44:30.950107 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c272h" event={"ID":"1d951c94-8c04-495f-b294-92a4cb70cd63","Type":"ContainerDied","Data":"4b4828ab60507db98b968fb3375d4353a7c7405aad5f0a178fd881d6ee7c44bd"} Nov 28 06:44:31 crc kubenswrapper[4955]: I1128 06:44:31.961646 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c272h" event={"ID":"1d951c94-8c04-495f-b294-92a4cb70cd63","Type":"ContainerStarted","Data":"adf5c0a6ce9cd5b27c07a64f3dd4db093a576fa2738f706ccc9a61faf31da55c"} Nov 28 06:44:31 crc kubenswrapper[4955]: I1128 06:44:31.985286 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c272h" podStartSLOduration=2.536597778 podStartE2EDuration="8.985271649s" podCreationTimestamp="2025-11-28 06:44:23 +0000 UTC" firstStartedPulling="2025-11-28 06:44:24.898139402 +0000 UTC m=+1387.487395002" lastFinishedPulling="2025-11-28 06:44:31.346813283 +0000 UTC m=+1393.936068873" observedRunningTime="2025-11-28 06:44:31.983392376 +0000 UTC m=+1394.572647956" watchObservedRunningTime="2025-11-28 06:44:31.985271649 +0000 UTC m=+1394.574527219" Nov 28 06:44:33 crc kubenswrapper[4955]: I1128 06:44:33.715636 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:33 crc kubenswrapper[4955]: I1128 06:44:33.736912 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:33 crc kubenswrapper[4955]: I1128 06:44:33.791316 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:43 crc kubenswrapper[4955]: I1128 06:44:43.780273 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c272h" Nov 28 06:44:43 crc 
kubenswrapper[4955]: I1128 06:44:43.877650 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c272h"] Nov 28 06:44:43 crc kubenswrapper[4955]: I1128 06:44:43.945855 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4q2nl"] Nov 28 06:44:43 crc kubenswrapper[4955]: I1128 06:44:43.946115 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4q2nl" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerName="registry-server" containerID="cri-o://1b02acdf853ac9f2ef76b7dc4ac61f5c95a999233405abf8e95d070de41bf3a1" gracePeriod=2 Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.121593 4955 generic.go:334] "Generic (PLEG): container finished" podID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerID="1b02acdf853ac9f2ef76b7dc4ac61f5c95a999233405abf8e95d070de41bf3a1" exitCode=0 Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.121638 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q2nl" event={"ID":"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae","Type":"ContainerDied","Data":"1b02acdf853ac9f2ef76b7dc4ac61f5c95a999233405abf8e95d070de41bf3a1"} Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.430856 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.519949 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs9xn\" (UniqueName: \"kubernetes.io/projected/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-kube-api-access-xs9xn\") pod \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.520077 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-utilities\") pod \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.520117 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-catalog-content\") pod \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\" (UID: \"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae\") " Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.520836 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-utilities" (OuterVolumeSpecName: "utilities") pod "0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" (UID: "0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.526714 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-kube-api-access-xs9xn" (OuterVolumeSpecName: "kube-api-access-xs9xn") pod "0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" (UID: "0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae"). InnerVolumeSpecName "kube-api-access-xs9xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.574650 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" (UID: "0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.622269 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.622297 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:44:44 crc kubenswrapper[4955]: I1128 06:44:44.622308 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs9xn\" (UniqueName: \"kubernetes.io/projected/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae-kube-api-access-xs9xn\") on node \"crc\" DevicePath \"\"" Nov 28 06:44:45 crc kubenswrapper[4955]: I1128 06:44:45.142053 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4q2nl" event={"ID":"0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae","Type":"ContainerDied","Data":"59ebf05076bb37dcab9935f6eaea7d41b5cda2b5a1aa9d67ee73f9b551ca7448"} Nov 28 06:44:45 crc kubenswrapper[4955]: I1128 06:44:45.142124 4955 scope.go:117] "RemoveContainer" containerID="1b02acdf853ac9f2ef76b7dc4ac61f5c95a999233405abf8e95d070de41bf3a1" Nov 28 06:44:45 crc kubenswrapper[4955]: I1128 06:44:45.142364 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4q2nl" Nov 28 06:44:45 crc kubenswrapper[4955]: I1128 06:44:45.161044 4955 scope.go:117] "RemoveContainer" containerID="a8ac454de89fb8d1c432484bfe08b77118d30aa827c1b2223f3fefaa542d9a94" Nov 28 06:44:45 crc kubenswrapper[4955]: I1128 06:44:45.187362 4955 scope.go:117] "RemoveContainer" containerID="16d22873994a205943f59c9632900ac1d856cc6d13ec0944c4117fb443573dee" Nov 28 06:44:45 crc kubenswrapper[4955]: I1128 06:44:45.225974 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4q2nl"] Nov 28 06:44:45 crc kubenswrapper[4955]: I1128 06:44:45.233743 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4q2nl"] Nov 28 06:44:45 crc kubenswrapper[4955]: I1128 06:44:45.723205 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" path="/var/lib/kubelet/pods/0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae/volumes" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.161561 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr"] Nov 28 06:45:00 crc kubenswrapper[4955]: E1128 06:45:00.164071 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerName="extract-content" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.164100 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerName="extract-content" Nov 28 06:45:00 crc kubenswrapper[4955]: E1128 06:45:00.164114 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerName="registry-server" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.164120 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" 
containerName="registry-server" Nov 28 06:45:00 crc kubenswrapper[4955]: E1128 06:45:00.164132 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerName="extract-utilities" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.164138 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerName="extract-utilities" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.164334 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d93d94f-56a0-44c9-9c1c-7d91f7b9d9ae" containerName="registry-server" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.165038 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.168884 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.170222 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.175145 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr"] Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.200587 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2ksb\" (UniqueName: \"kubernetes.io/projected/9b805d70-2eba-4e7f-af58-4c60699cc49e-kube-api-access-v2ksb\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.200666 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b805d70-2eba-4e7f-af58-4c60699cc49e-config-volume\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.200689 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b805d70-2eba-4e7f-af58-4c60699cc49e-secret-volume\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.302072 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b805d70-2eba-4e7f-af58-4c60699cc49e-config-volume\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.302126 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b805d70-2eba-4e7f-af58-4c60699cc49e-secret-volume\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.302370 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2ksb\" (UniqueName: \"kubernetes.io/projected/9b805d70-2eba-4e7f-af58-4c60699cc49e-kube-api-access-v2ksb\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.303111 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b805d70-2eba-4e7f-af58-4c60699cc49e-config-volume\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.311247 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b805d70-2eba-4e7f-af58-4c60699cc49e-secret-volume\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.319189 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2ksb\" (UniqueName: \"kubernetes.io/projected/9b805d70-2eba-4e7f-af58-4c60699cc49e-kube-api-access-v2ksb\") pod \"collect-profiles-29405205-vqqmr\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.518176 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:00 crc kubenswrapper[4955]: I1128 06:45:00.970416 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr"] Nov 28 06:45:01 crc kubenswrapper[4955]: I1128 06:45:01.352498 4955 generic.go:334] "Generic (PLEG): container finished" podID="9b805d70-2eba-4e7f-af58-4c60699cc49e" containerID="96224d33337769c38bc4ecf81501d596cf98dd6bb644f7bf2fe541414c33b809" exitCode=0 Nov 28 06:45:01 crc kubenswrapper[4955]: I1128 06:45:01.352563 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" event={"ID":"9b805d70-2eba-4e7f-af58-4c60699cc49e","Type":"ContainerDied","Data":"96224d33337769c38bc4ecf81501d596cf98dd6bb644f7bf2fe541414c33b809"} Nov 28 06:45:01 crc kubenswrapper[4955]: I1128 06:45:01.352595 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" event={"ID":"9b805d70-2eba-4e7f-af58-4c60699cc49e","Type":"ContainerStarted","Data":"e52bebfb3cbad3dfd91e84cf903eaa5659feb523934e14f257f5f83107c47795"} Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.713058 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.755223 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b805d70-2eba-4e7f-af58-4c60699cc49e-secret-volume\") pod \"9b805d70-2eba-4e7f-af58-4c60699cc49e\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.755269 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b805d70-2eba-4e7f-af58-4c60699cc49e-config-volume\") pod \"9b805d70-2eba-4e7f-af58-4c60699cc49e\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.755530 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2ksb\" (UniqueName: \"kubernetes.io/projected/9b805d70-2eba-4e7f-af58-4c60699cc49e-kube-api-access-v2ksb\") pod \"9b805d70-2eba-4e7f-af58-4c60699cc49e\" (UID: \"9b805d70-2eba-4e7f-af58-4c60699cc49e\") " Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.756159 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b805d70-2eba-4e7f-af58-4c60699cc49e-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b805d70-2eba-4e7f-af58-4c60699cc49e" (UID: "9b805d70-2eba-4e7f-af58-4c60699cc49e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.761074 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b805d70-2eba-4e7f-af58-4c60699cc49e-kube-api-access-v2ksb" (OuterVolumeSpecName: "kube-api-access-v2ksb") pod "9b805d70-2eba-4e7f-af58-4c60699cc49e" (UID: "9b805d70-2eba-4e7f-af58-4c60699cc49e"). 
InnerVolumeSpecName "kube-api-access-v2ksb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.762159 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b805d70-2eba-4e7f-af58-4c60699cc49e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b805d70-2eba-4e7f-af58-4c60699cc49e" (UID: "9b805d70-2eba-4e7f-af58-4c60699cc49e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.857521 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2ksb\" (UniqueName: \"kubernetes.io/projected/9b805d70-2eba-4e7f-af58-4c60699cc49e-kube-api-access-v2ksb\") on node \"crc\" DevicePath \"\"" Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.857555 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b805d70-2eba-4e7f-af58-4c60699cc49e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:45:02 crc kubenswrapper[4955]: I1128 06:45:02.857569 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b805d70-2eba-4e7f-af58-4c60699cc49e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 06:45:03 crc kubenswrapper[4955]: I1128 06:45:03.379439 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" event={"ID":"9b805d70-2eba-4e7f-af58-4c60699cc49e","Type":"ContainerDied","Data":"e52bebfb3cbad3dfd91e84cf903eaa5659feb523934e14f257f5f83107c47795"} Nov 28 06:45:03 crc kubenswrapper[4955]: I1128 06:45:03.379807 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e52bebfb3cbad3dfd91e84cf903eaa5659feb523934e14f257f5f83107c47795" Nov 28 06:45:03 crc kubenswrapper[4955]: I1128 06:45:03.379533 4955 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr" Nov 28 06:45:23 crc kubenswrapper[4955]: I1128 06:45:23.393287 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:45:23 crc kubenswrapper[4955]: I1128 06:45:23.393935 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:45:25 crc kubenswrapper[4955]: I1128 06:45:25.078918 4955 scope.go:117] "RemoveContainer" containerID="48c19e98866543e52c02336738fb6daeff44c92a9a75746616a3858dbadb54d5" Nov 28 06:45:53 crc kubenswrapper[4955]: I1128 06:45:53.393072 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:45:53 crc kubenswrapper[4955]: I1128 06:45:53.394850 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:46:23 crc kubenswrapper[4955]: I1128 06:46:23.393166 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:46:23 crc kubenswrapper[4955]: I1128 06:46:23.393682 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:46:23 crc kubenswrapper[4955]: I1128 06:46:23.393726 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:46:23 crc kubenswrapper[4955]: I1128 06:46:23.394428 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:46:23 crc kubenswrapper[4955]: I1128 06:46:23.394482 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" gracePeriod=600 Nov 28 06:46:23 crc kubenswrapper[4955]: E1128 06:46:23.531592 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:46:24 crc kubenswrapper[4955]: I1128 06:46:24.275384 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" exitCode=0 Nov 28 06:46:24 crc kubenswrapper[4955]: I1128 06:46:24.275480 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476"} Nov 28 06:46:24 crc kubenswrapper[4955]: I1128 06:46:24.275868 4955 scope.go:117] "RemoveContainer" containerID="4aed55a0a733fd5fb3966b873f86b550981c4a573b9c9f3bb84203a8e1648584" Nov 28 06:46:24 crc kubenswrapper[4955]: I1128 06:46:24.277319 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:46:24 crc kubenswrapper[4955]: E1128 06:46:24.278561 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:46:25 crc kubenswrapper[4955]: I1128 06:46:25.302292 4955 scope.go:117] "RemoveContainer" containerID="7663f57d4616fae3d2c0db5f9458e0b890b81d158ab39247efa411e6cf3e2be2" Nov 28 06:46:25 crc kubenswrapper[4955]: I1128 06:46:25.330346 4955 scope.go:117] "RemoveContainer" containerID="21b63b57ac9ddee1568311525bddd9e857e644749c800af97ab513efbeee524a" Nov 28 06:46:25 crc kubenswrapper[4955]: I1128 06:46:25.351572 4955 scope.go:117] 
"RemoveContainer" containerID="f89611ce3bda246149ac6c272b666e22c658dd19303f4af902bb286f617b7092" Nov 28 06:46:25 crc kubenswrapper[4955]: I1128 06:46:25.373437 4955 scope.go:117] "RemoveContainer" containerID="9989fc09f844dc95050233e5333c9f54ec3ab1c2e0968b70db20aa5611b07e56" Nov 28 06:46:25 crc kubenswrapper[4955]: I1128 06:46:25.393421 4955 scope.go:117] "RemoveContainer" containerID="e17470f6dab2e167f7f7013223651e42e2e959bec2ae84c35b28ac4b3b12b575" Nov 28 06:46:25 crc kubenswrapper[4955]: I1128 06:46:25.423408 4955 scope.go:117] "RemoveContainer" containerID="1712c9c1d6ab3ff40c3ff583338abb2ae2d9ed67e47719cad0c670dc0fac4be4" Nov 28 06:46:28 crc kubenswrapper[4955]: I1128 06:46:28.334452 4955 generic.go:334] "Generic (PLEG): container finished" podID="be0906bb-475c-4229-9a9f-9a5361e6172e" containerID="8033cbc14d1b2653e9178010e618283713a20044717b0d33a2340fc53c0c0672" exitCode=0 Nov 28 06:46:28 crc kubenswrapper[4955]: I1128 06:46:28.334547 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" event={"ID":"be0906bb-475c-4229-9a9f-9a5361e6172e","Type":"ContainerDied","Data":"8033cbc14d1b2653e9178010e618283713a20044717b0d33a2340fc53c0c0672"} Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.834574 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.898287 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-ssh-key\") pod \"be0906bb-475c-4229-9a9f-9a5361e6172e\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.898486 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjfw\" (UniqueName: \"kubernetes.io/projected/be0906bb-475c-4229-9a9f-9a5361e6172e-kube-api-access-chjfw\") pod \"be0906bb-475c-4229-9a9f-9a5361e6172e\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.898541 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-bootstrap-combined-ca-bundle\") pod \"be0906bb-475c-4229-9a9f-9a5361e6172e\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.898678 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-inventory\") pod \"be0906bb-475c-4229-9a9f-9a5361e6172e\" (UID: \"be0906bb-475c-4229-9a9f-9a5361e6172e\") " Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.902955 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "be0906bb-475c-4229-9a9f-9a5361e6172e" (UID: "be0906bb-475c-4229-9a9f-9a5361e6172e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.903313 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0906bb-475c-4229-9a9f-9a5361e6172e-kube-api-access-chjfw" (OuterVolumeSpecName: "kube-api-access-chjfw") pod "be0906bb-475c-4229-9a9f-9a5361e6172e" (UID: "be0906bb-475c-4229-9a9f-9a5361e6172e"). InnerVolumeSpecName "kube-api-access-chjfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.945957 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be0906bb-475c-4229-9a9f-9a5361e6172e" (UID: "be0906bb-475c-4229-9a9f-9a5361e6172e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:46:29 crc kubenswrapper[4955]: I1128 06:46:29.946742 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-inventory" (OuterVolumeSpecName: "inventory") pod "be0906bb-475c-4229-9a9f-9a5361e6172e" (UID: "be0906bb-475c-4229-9a9f-9a5361e6172e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.000952 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.000999 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.001011 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjfw\" (UniqueName: \"kubernetes.io/projected/be0906bb-475c-4229-9a9f-9a5361e6172e-kube-api-access-chjfw\") on node \"crc\" DevicePath \"\"" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.001025 4955 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0906bb-475c-4229-9a9f-9a5361e6172e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.359313 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" event={"ID":"be0906bb-475c-4229-9a9f-9a5361e6172e","Type":"ContainerDied","Data":"75b71198a684816581edee7ac41909aa377d7ba165ef30a79a2a880e2e12ad5b"} Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.359374 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75b71198a684816581edee7ac41909aa377d7ba165ef30a79a2a880e2e12ad5b" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.359448 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.492715 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq"] Nov 28 06:46:30 crc kubenswrapper[4955]: E1128 06:46:30.509890 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b805d70-2eba-4e7f-af58-4c60699cc49e" containerName="collect-profiles" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.509922 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b805d70-2eba-4e7f-af58-4c60699cc49e" containerName="collect-profiles" Nov 28 06:46:30 crc kubenswrapper[4955]: E1128 06:46:30.509953 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0906bb-475c-4229-9a9f-9a5361e6172e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.509961 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0906bb-475c-4229-9a9f-9a5361e6172e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.513493 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b805d70-2eba-4e7f-af58-4c60699cc49e" containerName="collect-profiles" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.513579 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0906bb-475c-4229-9a9f-9a5361e6172e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.514895 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.515405 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq"] Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.516879 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.517049 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.517204 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.517305 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.616387 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.616484 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.616590 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vnh6\" (UniqueName: \"kubernetes.io/projected/40e141ea-e10b-4e62-a075-da26dee75286-kube-api-access-6vnh6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.718249 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.718365 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vnh6\" (UniqueName: \"kubernetes.io/projected/40e141ea-e10b-4e62-a075-da26dee75286-kube-api-access-6vnh6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.718634 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.729726 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.730021 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.742885 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vnh6\" (UniqueName: \"kubernetes.io/projected/40e141ea-e10b-4e62-a075-da26dee75286-kube-api-access-6vnh6\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:30 crc kubenswrapper[4955]: I1128 06:46:30.838795 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:46:31 crc kubenswrapper[4955]: I1128 06:46:31.400054 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq"] Nov 28 06:46:32 crc kubenswrapper[4955]: I1128 06:46:32.389019 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" event={"ID":"40e141ea-e10b-4e62-a075-da26dee75286","Type":"ContainerStarted","Data":"9bd2df066836ba0764091cf8f99f3b7082da4fa0ccf0f589c0c830b3a8ca1473"} Nov 28 06:46:32 crc kubenswrapper[4955]: I1128 06:46:32.389354 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" event={"ID":"40e141ea-e10b-4e62-a075-da26dee75286","Type":"ContainerStarted","Data":"b4965bcaa155355adca87e2544a8a95383e365bdea222576877ec31c2ed5ed39"} Nov 28 06:46:32 crc kubenswrapper[4955]: I1128 06:46:32.415004 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" podStartSLOduration=1.811750602 podStartE2EDuration="2.414986687s" podCreationTimestamp="2025-11-28 06:46:30 +0000 UTC" firstStartedPulling="2025-11-28 06:46:31.410598309 +0000 UTC m=+1513.999853879" lastFinishedPulling="2025-11-28 06:46:32.013834384 +0000 UTC m=+1514.603089964" observedRunningTime="2025-11-28 06:46:32.410721106 +0000 UTC m=+1514.999976696" watchObservedRunningTime="2025-11-28 06:46:32.414986687 +0000 UTC m=+1515.004242257" Nov 28 06:46:38 crc kubenswrapper[4955]: I1128 06:46:38.705160 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:46:38 crc kubenswrapper[4955]: E1128 06:46:38.705935 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.152315 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n4qmk"] Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.155740 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.197787 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4qmk"] Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.291857 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-utilities\") pod \"redhat-marketplace-n4qmk\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.292115 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-catalog-content\") pod \"redhat-marketplace-n4qmk\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.292246 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwpvn\" (UniqueName: \"kubernetes.io/projected/458e136e-9ae7-4c86-988b-497b932746bd-kube-api-access-pwpvn\") pod \"redhat-marketplace-n4qmk\" (UID: 
\"458e136e-9ae7-4c86-988b-497b932746bd\") " pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.393823 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-utilities\") pod \"redhat-marketplace-n4qmk\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.394229 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-catalog-content\") pod \"redhat-marketplace-n4qmk\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.394269 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-utilities\") pod \"redhat-marketplace-n4qmk\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.394276 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwpvn\" (UniqueName: \"kubernetes.io/projected/458e136e-9ae7-4c86-988b-497b932746bd-kube-api-access-pwpvn\") pod \"redhat-marketplace-n4qmk\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.394819 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-catalog-content\") pod \"redhat-marketplace-n4qmk\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " 
pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.421830 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwpvn\" (UniqueName: \"kubernetes.io/projected/458e136e-9ae7-4c86-988b-497b932746bd-kube-api-access-pwpvn\") pod \"redhat-marketplace-n4qmk\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.496138 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:43 crc kubenswrapper[4955]: I1128 06:46:43.970615 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4qmk"] Nov 28 06:46:44 crc kubenswrapper[4955]: I1128 06:46:44.507742 4955 generic.go:334] "Generic (PLEG): container finished" podID="458e136e-9ae7-4c86-988b-497b932746bd" containerID="6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d" exitCode=0 Nov 28 06:46:44 crc kubenswrapper[4955]: I1128 06:46:44.507826 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4qmk" event={"ID":"458e136e-9ae7-4c86-988b-497b932746bd","Type":"ContainerDied","Data":"6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d"} Nov 28 06:46:44 crc kubenswrapper[4955]: I1128 06:46:44.507867 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4qmk" event={"ID":"458e136e-9ae7-4c86-988b-497b932746bd","Type":"ContainerStarted","Data":"51776bdb3fbdee5741870da4ead2012374b7f2c873ebbec25cb4fb951f0cc95a"} Nov 28 06:46:46 crc kubenswrapper[4955]: I1128 06:46:46.531057 4955 generic.go:334] "Generic (PLEG): container finished" podID="458e136e-9ae7-4c86-988b-497b932746bd" containerID="3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5" exitCode=0 Nov 28 06:46:46 crc 
kubenswrapper[4955]: I1128 06:46:46.531139 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4qmk" event={"ID":"458e136e-9ae7-4c86-988b-497b932746bd","Type":"ContainerDied","Data":"3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5"} Nov 28 06:46:47 crc kubenswrapper[4955]: I1128 06:46:47.541280 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4qmk" event={"ID":"458e136e-9ae7-4c86-988b-497b932746bd","Type":"ContainerStarted","Data":"e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1"} Nov 28 06:46:47 crc kubenswrapper[4955]: I1128 06:46:47.568613 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n4qmk" podStartSLOduration=2.127921555 podStartE2EDuration="4.568594361s" podCreationTimestamp="2025-11-28 06:46:43 +0000 UTC" firstStartedPulling="2025-11-28 06:46:44.509806238 +0000 UTC m=+1527.099061798" lastFinishedPulling="2025-11-28 06:46:46.950479024 +0000 UTC m=+1529.539734604" observedRunningTime="2025-11-28 06:46:47.560383888 +0000 UTC m=+1530.149639468" watchObservedRunningTime="2025-11-28 06:46:47.568594361 +0000 UTC m=+1530.157849931" Nov 28 06:46:53 crc kubenswrapper[4955]: I1128 06:46:53.496641 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:53 crc kubenswrapper[4955]: I1128 06:46:53.496990 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:53 crc kubenswrapper[4955]: I1128 06:46:53.572048 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:53 crc kubenswrapper[4955]: I1128 06:46:53.657176 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:53 crc kubenswrapper[4955]: I1128 06:46:53.704980 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:46:53 crc kubenswrapper[4955]: E1128 06:46:53.705330 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:46:53 crc kubenswrapper[4955]: I1128 06:46:53.813877 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4qmk"] Nov 28 06:46:55 crc kubenswrapper[4955]: I1128 06:46:55.612632 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n4qmk" podUID="458e136e-9ae7-4c86-988b-497b932746bd" containerName="registry-server" containerID="cri-o://e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1" gracePeriod=2 Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.084763 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.168765 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-utilities\") pod \"458e136e-9ae7-4c86-988b-497b932746bd\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.168831 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-catalog-content\") pod \"458e136e-9ae7-4c86-988b-497b932746bd\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.169002 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwpvn\" (UniqueName: \"kubernetes.io/projected/458e136e-9ae7-4c86-988b-497b932746bd-kube-api-access-pwpvn\") pod \"458e136e-9ae7-4c86-988b-497b932746bd\" (UID: \"458e136e-9ae7-4c86-988b-497b932746bd\") " Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.169811 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-utilities" (OuterVolumeSpecName: "utilities") pod "458e136e-9ae7-4c86-988b-497b932746bd" (UID: "458e136e-9ae7-4c86-988b-497b932746bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.176807 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458e136e-9ae7-4c86-988b-497b932746bd-kube-api-access-pwpvn" (OuterVolumeSpecName: "kube-api-access-pwpvn") pod "458e136e-9ae7-4c86-988b-497b932746bd" (UID: "458e136e-9ae7-4c86-988b-497b932746bd"). InnerVolumeSpecName "kube-api-access-pwpvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.186414 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "458e136e-9ae7-4c86-988b-497b932746bd" (UID: "458e136e-9ae7-4c86-988b-497b932746bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.270874 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwpvn\" (UniqueName: \"kubernetes.io/projected/458e136e-9ae7-4c86-988b-497b932746bd-kube-api-access-pwpvn\") on node \"crc\" DevicePath \"\"" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.270905 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.270915 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/458e136e-9ae7-4c86-988b-497b932746bd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.631210 4955 generic.go:334] "Generic (PLEG): container finished" podID="458e136e-9ae7-4c86-988b-497b932746bd" containerID="e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1" exitCode=0 Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.631254 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n4qmk" event={"ID":"458e136e-9ae7-4c86-988b-497b932746bd","Type":"ContainerDied","Data":"e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1"} Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.631298 4955 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-n4qmk" event={"ID":"458e136e-9ae7-4c86-988b-497b932746bd","Type":"ContainerDied","Data":"51776bdb3fbdee5741870da4ead2012374b7f2c873ebbec25cb4fb951f0cc95a"} Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.631315 4955 scope.go:117] "RemoveContainer" containerID="e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.631338 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n4qmk" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.654877 4955 scope.go:117] "RemoveContainer" containerID="3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.734723 4955 scope.go:117] "RemoveContainer" containerID="6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.785849 4955 scope.go:117] "RemoveContainer" containerID="e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1" Nov 28 06:46:56 crc kubenswrapper[4955]: E1128 06:46:56.788010 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1\": container with ID starting with e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1 not found: ID does not exist" containerID="e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.788058 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1"} err="failed to get container status \"e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1\": rpc error: code = NotFound desc = could not find container 
\"e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1\": container with ID starting with e1d52522d0c722725a2d059a0bce974f93caeb8845be330041fd466a345fe1b1 not found: ID does not exist" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.788091 4955 scope.go:117] "RemoveContainer" containerID="3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5" Nov 28 06:46:56 crc kubenswrapper[4955]: E1128 06:46:56.795189 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5\": container with ID starting with 3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5 not found: ID does not exist" containerID="3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.795231 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5"} err="failed to get container status \"3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5\": rpc error: code = NotFound desc = could not find container \"3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5\": container with ID starting with 3c1fe95b948e626de6df2179ebb735ff35b30d4103750fa96a3199e8084277b5 not found: ID does not exist" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.795260 4955 scope.go:117] "RemoveContainer" containerID="6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d" Nov 28 06:46:56 crc kubenswrapper[4955]: E1128 06:46:56.798924 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d\": container with ID starting with 6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d not found: ID does not exist" 
containerID="6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.798982 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d"} err="failed to get container status \"6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d\": rpc error: code = NotFound desc = could not find container \"6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d\": container with ID starting with 6f1d274d86d010aa928550d17c876da5e2ca8ad1f438f83b7128b6c0733c743d not found: ID does not exist" Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.805995 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4qmk"] Nov 28 06:46:56 crc kubenswrapper[4955]: I1128 06:46:56.817934 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n4qmk"] Nov 28 06:46:57 crc kubenswrapper[4955]: I1128 06:46:57.714434 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458e136e-9ae7-4c86-988b-497b932746bd" path="/var/lib/kubelet/pods/458e136e-9ae7-4c86-988b-497b932746bd/volumes" Nov 28 06:47:05 crc kubenswrapper[4955]: I1128 06:47:05.704072 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:47:05 crc kubenswrapper[4955]: E1128 06:47:05.705593 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:47:16 crc kubenswrapper[4955]: I1128 
06:47:16.704006 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:47:16 crc kubenswrapper[4955]: E1128 06:47:16.704695 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:47:25 crc kubenswrapper[4955]: I1128 06:47:25.484311 4955 scope.go:117] "RemoveContainer" containerID="a5b9cbd98792fd41eaa48ca2272d1f94b24c00ab056c48e97309f0014e39e28e" Nov 28 06:47:25 crc kubenswrapper[4955]: I1128 06:47:25.506744 4955 scope.go:117] "RemoveContainer" containerID="01b04c868eacb805b05d8fd55275cd2af4e8e5e60540db128aff93bac204f619" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.674605 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fb48f"] Nov 28 06:47:27 crc kubenswrapper[4955]: E1128 06:47:27.675404 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458e136e-9ae7-4c86-988b-497b932746bd" containerName="extract-content" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.675422 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="458e136e-9ae7-4c86-988b-497b932746bd" containerName="extract-content" Nov 28 06:47:27 crc kubenswrapper[4955]: E1128 06:47:27.675444 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458e136e-9ae7-4c86-988b-497b932746bd" containerName="extract-utilities" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.675452 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="458e136e-9ae7-4c86-988b-497b932746bd" containerName="extract-utilities" Nov 28 06:47:27 crc kubenswrapper[4955]: E1128 
06:47:27.675492 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458e136e-9ae7-4c86-988b-497b932746bd" containerName="registry-server" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.675570 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="458e136e-9ae7-4c86-988b-497b932746bd" containerName="registry-server" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.675796 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="458e136e-9ae7-4c86-988b-497b932746bd" containerName="registry-server" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.677561 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.691370 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fb48f"] Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.761162 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwxmt\" (UniqueName: \"kubernetes.io/projected/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-kube-api-access-zwxmt\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.761213 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-catalog-content\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.761643 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-utilities\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.865202 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwxmt\" (UniqueName: \"kubernetes.io/projected/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-kube-api-access-zwxmt\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.865842 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-catalog-content\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.866155 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-utilities\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.866891 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-utilities\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.867012 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-catalog-content\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:27 crc kubenswrapper[4955]: I1128 06:47:27.887595 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwxmt\" (UniqueName: \"kubernetes.io/projected/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-kube-api-access-zwxmt\") pod \"community-operators-fb48f\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:28 crc kubenswrapper[4955]: I1128 06:47:28.032854 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:28 crc kubenswrapper[4955]: I1128 06:47:28.633858 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fb48f"] Nov 28 06:47:29 crc kubenswrapper[4955]: I1128 06:47:29.000428 4955 generic.go:334] "Generic (PLEG): container finished" podID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerID="8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122" exitCode=0 Nov 28 06:47:29 crc kubenswrapper[4955]: I1128 06:47:29.000619 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb48f" event={"ID":"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e","Type":"ContainerDied","Data":"8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122"} Nov 28 06:47:29 crc kubenswrapper[4955]: I1128 06:47:29.000720 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb48f" event={"ID":"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e","Type":"ContainerStarted","Data":"0f8835c73b6571e3627c0fee7f7e0f016625df004194e9cd98ab594a80ab1107"} Nov 28 06:47:29 crc kubenswrapper[4955]: I1128 06:47:29.705042 4955 scope.go:117] "RemoveContainer" 
containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:47:29 crc kubenswrapper[4955]: E1128 06:47:29.705334 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:47:30 crc kubenswrapper[4955]: I1128 06:47:30.012101 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb48f" event={"ID":"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e","Type":"ContainerStarted","Data":"4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90"} Nov 28 06:47:31 crc kubenswrapper[4955]: I1128 06:47:31.025879 4955 generic.go:334] "Generic (PLEG): container finished" podID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerID="4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90" exitCode=0 Nov 28 06:47:31 crc kubenswrapper[4955]: I1128 06:47:31.025935 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb48f" event={"ID":"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e","Type":"ContainerDied","Data":"4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90"} Nov 28 06:47:32 crc kubenswrapper[4955]: I1128 06:47:32.041363 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb48f" event={"ID":"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e","Type":"ContainerStarted","Data":"9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d"} Nov 28 06:47:32 crc kubenswrapper[4955]: I1128 06:47:32.084733 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fb48f" 
podStartSLOduration=2.470691299 podStartE2EDuration="5.084704066s" podCreationTimestamp="2025-11-28 06:47:27 +0000 UTC" firstStartedPulling="2025-11-28 06:47:29.002701797 +0000 UTC m=+1571.591957367" lastFinishedPulling="2025-11-28 06:47:31.616714554 +0000 UTC m=+1574.205970134" observedRunningTime="2025-11-28 06:47:32.072352587 +0000 UTC m=+1574.661608207" watchObservedRunningTime="2025-11-28 06:47:32.084704066 +0000 UTC m=+1574.673959676" Nov 28 06:47:38 crc kubenswrapper[4955]: I1128 06:47:38.033701 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:38 crc kubenswrapper[4955]: I1128 06:47:38.034153 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:38 crc kubenswrapper[4955]: I1128 06:47:38.117909 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:38 crc kubenswrapper[4955]: I1128 06:47:38.229824 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:38 crc kubenswrapper[4955]: I1128 06:47:38.358684 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fb48f"] Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.120169 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fb48f" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerName="registry-server" containerID="cri-o://9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d" gracePeriod=2 Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.559116 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.616690 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-utilities\") pod \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.616776 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-catalog-content\") pod \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.617031 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwxmt\" (UniqueName: \"kubernetes.io/projected/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-kube-api-access-zwxmt\") pod \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\" (UID: \"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e\") " Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.617560 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-utilities" (OuterVolumeSpecName: "utilities") pod "3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" (UID: "3ee1b8b1-bf37-4050-8db2-3347a2f95c8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.624187 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-kube-api-access-zwxmt" (OuterVolumeSpecName: "kube-api-access-zwxmt") pod "3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" (UID: "3ee1b8b1-bf37-4050-8db2-3347a2f95c8e"). InnerVolumeSpecName "kube-api-access-zwxmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.676820 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" (UID: "3ee1b8b1-bf37-4050-8db2-3347a2f95c8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.719497 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwxmt\" (UniqueName: \"kubernetes.io/projected/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-kube-api-access-zwxmt\") on node \"crc\" DevicePath \"\"" Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.719570 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:47:40 crc kubenswrapper[4955]: I1128 06:47:40.719589 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.134830 4955 generic.go:334] "Generic (PLEG): container finished" podID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerID="9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d" exitCode=0 Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.134886 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fb48f" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.134888 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb48f" event={"ID":"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e","Type":"ContainerDied","Data":"9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d"} Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.135054 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fb48f" event={"ID":"3ee1b8b1-bf37-4050-8db2-3347a2f95c8e","Type":"ContainerDied","Data":"0f8835c73b6571e3627c0fee7f7e0f016625df004194e9cd98ab594a80ab1107"} Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.135570 4955 scope.go:117] "RemoveContainer" containerID="9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.160648 4955 scope.go:117] "RemoveContainer" containerID="4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.174519 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fb48f"] Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.186071 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fb48f"] Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.198526 4955 scope.go:117] "RemoveContainer" containerID="8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.250293 4955 scope.go:117] "RemoveContainer" containerID="9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d" Nov 28 06:47:41 crc kubenswrapper[4955]: E1128 06:47:41.250891 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d\": container with ID starting with 9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d not found: ID does not exist" containerID="9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.250926 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d"} err="failed to get container status \"9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d\": rpc error: code = NotFound desc = could not find container \"9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d\": container with ID starting with 9d2513dea45f7f452e373db9cb6797649de9667c3121a79420280db8692a251d not found: ID does not exist" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.250952 4955 scope.go:117] "RemoveContainer" containerID="4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90" Nov 28 06:47:41 crc kubenswrapper[4955]: E1128 06:47:41.251269 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90\": container with ID starting with 4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90 not found: ID does not exist" containerID="4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.251285 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90"} err="failed to get container status \"4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90\": rpc error: code = NotFound desc = could not find container \"4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90\": container with ID 
starting with 4474419afa3622219ffbbe383a69488c89c1554792f0c0fb6d28f848ca468d90 not found: ID does not exist" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.251299 4955 scope.go:117] "RemoveContainer" containerID="8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122" Nov 28 06:47:41 crc kubenswrapper[4955]: E1128 06:47:41.251867 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122\": container with ID starting with 8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122 not found: ID does not exist" containerID="8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.251884 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122"} err="failed to get container status \"8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122\": rpc error: code = NotFound desc = could not find container \"8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122\": container with ID starting with 8f729e909fbd700387eb4f94e9e9fdf4fc628ae7974a2cb154da74ad0ebe2122 not found: ID does not exist" Nov 28 06:47:41 crc kubenswrapper[4955]: I1128 06:47:41.718242 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" path="/var/lib/kubelet/pods/3ee1b8b1-bf37-4050-8db2-3347a2f95c8e/volumes" Nov 28 06:47:43 crc kubenswrapper[4955]: I1128 06:47:43.705246 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:47:43 crc kubenswrapper[4955]: E1128 06:47:43.706022 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.051790 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vf4cq"] Nov 28 06:47:55 crc kubenswrapper[4955]: E1128 06:47:55.052845 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerName="registry-server" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.052857 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerName="registry-server" Nov 28 06:47:55 crc kubenswrapper[4955]: E1128 06:47:55.052868 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerName="extract-utilities" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.052874 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerName="extract-utilities" Nov 28 06:47:55 crc kubenswrapper[4955]: E1128 06:47:55.052893 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerName="extract-content" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.052901 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerName="extract-content" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.053084 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee1b8b1-bf37-4050-8db2-3347a2f95c8e" containerName="registry-server" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.054418 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.071411 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vf4cq"] Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.109106 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-utilities\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.109566 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-catalog-content\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.109637 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-556rs\" (UniqueName: \"kubernetes.io/projected/61a7a8f6-1767-4689-84bb-af26743ff9d4-kube-api-access-556rs\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.211229 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-catalog-content\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.211342 4955 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-556rs\" (UniqueName: \"kubernetes.io/projected/61a7a8f6-1767-4689-84bb-af26743ff9d4-kube-api-access-556rs\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.211381 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-utilities\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.211831 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-utilities\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.212110 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-catalog-content\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.233245 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-556rs\" (UniqueName: \"kubernetes.io/projected/61a7a8f6-1767-4689-84bb-af26743ff9d4-kube-api-access-556rs\") pod \"redhat-operators-vf4cq\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.425172 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:47:55 crc kubenswrapper[4955]: I1128 06:47:55.892134 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vf4cq"] Nov 28 06:47:56 crc kubenswrapper[4955]: I1128 06:47:56.309913 4955 generic.go:334] "Generic (PLEG): container finished" podID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerID="c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd" exitCode=0 Nov 28 06:47:56 crc kubenswrapper[4955]: I1128 06:47:56.310930 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vf4cq" event={"ID":"61a7a8f6-1767-4689-84bb-af26743ff9d4","Type":"ContainerDied","Data":"c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd"} Nov 28 06:47:56 crc kubenswrapper[4955]: I1128 06:47:56.311471 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vf4cq" event={"ID":"61a7a8f6-1767-4689-84bb-af26743ff9d4","Type":"ContainerStarted","Data":"d901f38b2a5125b8164971e78255229b2044047bfedf7ec0574b7e656e3d921c"} Nov 28 06:47:57 crc kubenswrapper[4955]: I1128 06:47:57.322267 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vf4cq" event={"ID":"61a7a8f6-1767-4689-84bb-af26743ff9d4","Type":"ContainerStarted","Data":"a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765"} Nov 28 06:47:58 crc kubenswrapper[4955]: I1128 06:47:58.360811 4955 generic.go:334] "Generic (PLEG): container finished" podID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerID="a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765" exitCode=0 Nov 28 06:47:58 crc kubenswrapper[4955]: I1128 06:47:58.360874 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vf4cq" 
event={"ID":"61a7a8f6-1767-4689-84bb-af26743ff9d4","Type":"ContainerDied","Data":"a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765"} Nov 28 06:47:58 crc kubenswrapper[4955]: I1128 06:47:58.705331 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:47:58 crc kubenswrapper[4955]: E1128 06:47:58.705842 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:48:03 crc kubenswrapper[4955]: I1128 06:48:03.418600 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vf4cq" event={"ID":"61a7a8f6-1767-4689-84bb-af26743ff9d4","Type":"ContainerStarted","Data":"400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4"} Nov 28 06:48:03 crc kubenswrapper[4955]: I1128 06:48:03.440061 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vf4cq" podStartSLOduration=2.410451016 podStartE2EDuration="8.440043123s" podCreationTimestamp="2025-11-28 06:47:55 +0000 UTC" firstStartedPulling="2025-11-28 06:47:56.311977128 +0000 UTC m=+1598.901232688" lastFinishedPulling="2025-11-28 06:48:02.341569225 +0000 UTC m=+1604.930824795" observedRunningTime="2025-11-28 06:48:03.435843964 +0000 UTC m=+1606.025099554" watchObservedRunningTime="2025-11-28 06:48:03.440043123 +0000 UTC m=+1606.029298713" Nov 28 06:48:05 crc kubenswrapper[4955]: I1128 06:48:05.425859 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:48:05 crc kubenswrapper[4955]: 
I1128 06:48:05.426169 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:48:06 crc kubenswrapper[4955]: I1128 06:48:06.479024 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vf4cq" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="registry-server" probeResult="failure" output=< Nov 28 06:48:06 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Nov 28 06:48:06 crc kubenswrapper[4955]: > Nov 28 06:48:13 crc kubenswrapper[4955]: I1128 06:48:13.704456 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:48:13 crc kubenswrapper[4955]: E1128 06:48:13.705225 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.050571 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kb8tx"] Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.066800 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ddf5-account-create-update-5lqm2"] Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.094120 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-97bc-account-create-update-tkw67"] Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.106789 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kb8tx"] Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.118502 4955 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/keystone-97bc-account-create-update-tkw67"] Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.129202 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ddf5-account-create-update-5lqm2"] Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.505710 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.584237 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.719477 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301210c2-d75a-4798-bca8-3a431ba13279" path="/var/lib/kubelet/pods/301210c2-d75a-4798-bca8-3a431ba13279/volumes" Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.721007 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f65108f-7b8b-41af-a2b6-d1893982504f" path="/var/lib/kubelet/pods/7f65108f-7b8b-41af-a2b6-d1893982504f/volumes" Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.722314 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e0ef99-b7f7-401a-a834-d0581bfd39e5" path="/var/lib/kubelet/pods/b9e0ef99-b7f7-401a-a834-d0581bfd39e5/volumes" Nov 28 06:48:15 crc kubenswrapper[4955]: I1128 06:48:15.765000 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vf4cq"] Nov 28 06:48:16 crc kubenswrapper[4955]: I1128 06:48:16.049245 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-xzbsd"] Nov 28 06:48:16 crc kubenswrapper[4955]: I1128 06:48:16.060542 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-xzbsd"] Nov 28 06:48:16 crc kubenswrapper[4955]: I1128 06:48:16.548825 4955 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-vf4cq" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="registry-server" containerID="cri-o://400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4" gracePeriod=2 Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.066944 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.196143 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-utilities\") pod \"61a7a8f6-1767-4689-84bb-af26743ff9d4\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.196279 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-556rs\" (UniqueName: \"kubernetes.io/projected/61a7a8f6-1767-4689-84bb-af26743ff9d4-kube-api-access-556rs\") pod \"61a7a8f6-1767-4689-84bb-af26743ff9d4\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.196791 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-catalog-content\") pod \"61a7a8f6-1767-4689-84bb-af26743ff9d4\" (UID: \"61a7a8f6-1767-4689-84bb-af26743ff9d4\") " Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.197111 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-utilities" (OuterVolumeSpecName: "utilities") pod "61a7a8f6-1767-4689-84bb-af26743ff9d4" (UID: "61a7a8f6-1767-4689-84bb-af26743ff9d4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.197673 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.202921 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a7a8f6-1767-4689-84bb-af26743ff9d4-kube-api-access-556rs" (OuterVolumeSpecName: "kube-api-access-556rs") pod "61a7a8f6-1767-4689-84bb-af26743ff9d4" (UID: "61a7a8f6-1767-4689-84bb-af26743ff9d4"). InnerVolumeSpecName "kube-api-access-556rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.298128 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61a7a8f6-1767-4689-84bb-af26743ff9d4" (UID: "61a7a8f6-1767-4689-84bb-af26743ff9d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.301339 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7a8f6-1767-4689-84bb-af26743ff9d4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.301394 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-556rs\" (UniqueName: \"kubernetes.io/projected/61a7a8f6-1767-4689-84bb-af26743ff9d4-kube-api-access-556rs\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.560097 4955 generic.go:334] "Generic (PLEG): container finished" podID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerID="400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4" exitCode=0 Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.560127 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vf4cq" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.560141 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vf4cq" event={"ID":"61a7a8f6-1767-4689-84bb-af26743ff9d4","Type":"ContainerDied","Data":"400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4"} Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.560820 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vf4cq" event={"ID":"61a7a8f6-1767-4689-84bb-af26743ff9d4","Type":"ContainerDied","Data":"d901f38b2a5125b8164971e78255229b2044047bfedf7ec0574b7e656e3d921c"} Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.560922 4955 scope.go:117] "RemoveContainer" containerID="400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.613610 4955 scope.go:117] "RemoveContainer" 
containerID="a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.621891 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vf4cq"] Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.637719 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vf4cq"] Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.639999 4955 scope.go:117] "RemoveContainer" containerID="c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.672730 4955 scope.go:117] "RemoveContainer" containerID="400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4" Nov 28 06:48:17 crc kubenswrapper[4955]: E1128 06:48:17.673190 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4\": container with ID starting with 400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4 not found: ID does not exist" containerID="400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.673270 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4"} err="failed to get container status \"400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4\": rpc error: code = NotFound desc = could not find container \"400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4\": container with ID starting with 400d749761af7d01ba59a7cbc198e963d1fcb3bccdaed4d9d687b7ac08321ea4 not found: ID does not exist" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.673338 4955 scope.go:117] "RemoveContainer" 
containerID="a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765" Nov 28 06:48:17 crc kubenswrapper[4955]: E1128 06:48:17.673659 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765\": container with ID starting with a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765 not found: ID does not exist" containerID="a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.673701 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765"} err="failed to get container status \"a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765\": rpc error: code = NotFound desc = could not find container \"a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765\": container with ID starting with a598bc40c8ac8c188257b7ea94e002892968ede71713a1009e73eb6fb2022765 not found: ID does not exist" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.673731 4955 scope.go:117] "RemoveContainer" containerID="c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd" Nov 28 06:48:17 crc kubenswrapper[4955]: E1128 06:48:17.673967 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd\": container with ID starting with c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd not found: ID does not exist" containerID="c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.674036 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd"} err="failed to get container status \"c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd\": rpc error: code = NotFound desc = could not find container \"c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd\": container with ID starting with c01b28c999a9c05bd304d929c4d561ba4fab25d1899352b8421ed5df297588dd not found: ID does not exist" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.720570 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0983c5c7-3f49-4ac0-b096-31df44191680" path="/var/lib/kubelet/pods/0983c5c7-3f49-4ac0-b096-31df44191680/volumes" Nov 28 06:48:17 crc kubenswrapper[4955]: I1128 06:48:17.721802 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" path="/var/lib/kubelet/pods/61a7a8f6-1767-4689-84bb-af26743ff9d4/volumes" Nov 28 06:48:20 crc kubenswrapper[4955]: I1128 06:48:20.048406 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lc5bw"] Nov 28 06:48:20 crc kubenswrapper[4955]: I1128 06:48:20.063165 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-06a9-account-create-update-x2rxg"] Nov 28 06:48:20 crc kubenswrapper[4955]: I1128 06:48:20.073244 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lc5bw"] Nov 28 06:48:20 crc kubenswrapper[4955]: I1128 06:48:20.082768 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-06a9-account-create-update-x2rxg"] Nov 28 06:48:20 crc kubenswrapper[4955]: I1128 06:48:20.601355 4955 generic.go:334] "Generic (PLEG): container finished" podID="40e141ea-e10b-4e62-a075-da26dee75286" containerID="9bd2df066836ba0764091cf8f99f3b7082da4fa0ccf0f589c0c830b3a8ca1473" exitCode=0 Nov 28 06:48:20 crc kubenswrapper[4955]: I1128 06:48:20.601415 4955 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" event={"ID":"40e141ea-e10b-4e62-a075-da26dee75286","Type":"ContainerDied","Data":"9bd2df066836ba0764091cf8f99f3b7082da4fa0ccf0f589c0c830b3a8ca1473"} Nov 28 06:48:21 crc kubenswrapper[4955]: I1128 06:48:21.728405 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3e480c-0b9f-4b17-904a-4fd047194f99" path="/var/lib/kubelet/pods/7e3e480c-0b9f-4b17-904a-4fd047194f99/volumes" Nov 28 06:48:21 crc kubenswrapper[4955]: I1128 06:48:21.729967 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a218d231-afdd-433f-9ce9-a8b50e3b3631" path="/var/lib/kubelet/pods/a218d231-afdd-433f-9ce9-a8b50e3b3631/volumes" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.114454 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.311083 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-ssh-key\") pod \"40e141ea-e10b-4e62-a075-da26dee75286\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.311543 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-inventory\") pod \"40e141ea-e10b-4e62-a075-da26dee75286\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.311708 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vnh6\" (UniqueName: \"kubernetes.io/projected/40e141ea-e10b-4e62-a075-da26dee75286-kube-api-access-6vnh6\") pod \"40e141ea-e10b-4e62-a075-da26dee75286\" (UID: \"40e141ea-e10b-4e62-a075-da26dee75286\") " Nov 28 
06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.317216 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e141ea-e10b-4e62-a075-da26dee75286-kube-api-access-6vnh6" (OuterVolumeSpecName: "kube-api-access-6vnh6") pod "40e141ea-e10b-4e62-a075-da26dee75286" (UID: "40e141ea-e10b-4e62-a075-da26dee75286"). InnerVolumeSpecName "kube-api-access-6vnh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.360747 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "40e141ea-e10b-4e62-a075-da26dee75286" (UID: "40e141ea-e10b-4e62-a075-da26dee75286"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.368263 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-inventory" (OuterVolumeSpecName: "inventory") pod "40e141ea-e10b-4e62-a075-da26dee75286" (UID: "40e141ea-e10b-4e62-a075-da26dee75286"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.414650 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vnh6\" (UniqueName: \"kubernetes.io/projected/40e141ea-e10b-4e62-a075-da26dee75286-kube-api-access-6vnh6\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.414699 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.414718 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40e141ea-e10b-4e62-a075-da26dee75286-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.628657 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" event={"ID":"40e141ea-e10b-4e62-a075-da26dee75286","Type":"ContainerDied","Data":"b4965bcaa155355adca87e2544a8a95383e365bdea222576877ec31c2ed5ed39"} Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.628704 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4965bcaa155355adca87e2544a8a95383e365bdea222576877ec31c2ed5ed39" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.628747 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.728661 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k"] Nov 28 06:48:22 crc kubenswrapper[4955]: E1128 06:48:22.729099 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e141ea-e10b-4e62-a075-da26dee75286" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.729123 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e141ea-e10b-4e62-a075-da26dee75286" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 06:48:22 crc kubenswrapper[4955]: E1128 06:48:22.729150 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="extract-content" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.729158 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="extract-content" Nov 28 06:48:22 crc kubenswrapper[4955]: E1128 06:48:22.729179 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="extract-utilities" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.729187 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="extract-utilities" Nov 28 06:48:22 crc kubenswrapper[4955]: E1128 06:48:22.729209 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="registry-server" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.729257 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="registry-server" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 
06:48:22.729534 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e141ea-e10b-4e62-a075-da26dee75286" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.729551 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7a8f6-1767-4689-84bb-af26743ff9d4" containerName="registry-server" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.730289 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.734496 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.734582 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.734868 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.735201 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.749343 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k"] Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.924137 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:22 crc 
kubenswrapper[4955]: I1128 06:48:22.924322 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:22 crc kubenswrapper[4955]: I1128 06:48:22.924498 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbps7\" (UniqueName: \"kubernetes.io/projected/3efdabfd-7ad3-4586-8398-97512113e085-kube-api-access-cbps7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.026039 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.026149 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.026209 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbps7\" (UniqueName: 
\"kubernetes.io/projected/3efdabfd-7ad3-4586-8398-97512113e085-kube-api-access-cbps7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.031073 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.032089 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.060622 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbps7\" (UniqueName: \"kubernetes.io/projected/3efdabfd-7ad3-4586-8398-97512113e085-kube-api-access-cbps7\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b567k\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.352625 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.771211 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k"] Nov 28 06:48:23 crc kubenswrapper[4955]: I1128 06:48:23.774456 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:48:24 crc kubenswrapper[4955]: I1128 06:48:24.653365 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" event={"ID":"3efdabfd-7ad3-4586-8398-97512113e085","Type":"ContainerStarted","Data":"1ee1a8e7390e274c3ed5b7ffebc6eba32a8f5c19cba3ef6e7613e254d72dad85"} Nov 28 06:48:24 crc kubenswrapper[4955]: I1128 06:48:24.653865 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" event={"ID":"3efdabfd-7ad3-4586-8398-97512113e085","Type":"ContainerStarted","Data":"ab8816079c3af07ed9b0da4a857b92b2c7057e1034e7086299352aaddc62480c"} Nov 28 06:48:24 crc kubenswrapper[4955]: I1128 06:48:24.670550 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" podStartSLOduration=2.159031692 podStartE2EDuration="2.670487292s" podCreationTimestamp="2025-11-28 06:48:22 +0000 UTC" firstStartedPulling="2025-11-28 06:48:23.774265223 +0000 UTC m=+1626.363520793" lastFinishedPulling="2025-11-28 06:48:24.285720813 +0000 UTC m=+1626.874976393" observedRunningTime="2025-11-28 06:48:24.669537895 +0000 UTC m=+1627.258793505" watchObservedRunningTime="2025-11-28 06:48:24.670487292 +0000 UTC m=+1627.259742902" Nov 28 06:48:24 crc kubenswrapper[4955]: I1128 06:48:24.705179 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 
06:48:24 crc kubenswrapper[4955]: E1128 06:48:24.705612 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:48:25 crc kubenswrapper[4955]: I1128 06:48:25.602584 4955 scope.go:117] "RemoveContainer" containerID="44403e446c9bb229c16c6b58bd407cd20b18f4ad3626ed23c9510fb0c2985533" Nov 28 06:48:25 crc kubenswrapper[4955]: I1128 06:48:25.651628 4955 scope.go:117] "RemoveContainer" containerID="c845ba95fa41c18934ea5deb20e72e4992772baf01f1c0b2cc55aa4e8e59401e" Nov 28 06:48:25 crc kubenswrapper[4955]: I1128 06:48:25.720157 4955 scope.go:117] "RemoveContainer" containerID="d22e93b6f0552960fcf92e5666ee86a336f2505165d71bfc002d1bf8c6dde707" Nov 28 06:48:25 crc kubenswrapper[4955]: I1128 06:48:25.769479 4955 scope.go:117] "RemoveContainer" containerID="a87e624e918f15aed1e173f4e7cdaaead50289f2423da5c14e210fd29c53bf7e" Nov 28 06:48:25 crc kubenswrapper[4955]: I1128 06:48:25.808839 4955 scope.go:117] "RemoveContainer" containerID="697cc3dce80600b8ffd76dc0d63640ce79c93d8fcf145090e80a9be5ad06027a" Nov 28 06:48:25 crc kubenswrapper[4955]: I1128 06:48:25.848935 4955 scope.go:117] "RemoveContainer" containerID="18ee145313623ca08cedfa3bc78566493cc1792cffdddfc68d998387d0c09880" Nov 28 06:48:38 crc kubenswrapper[4955]: I1128 06:48:38.709706 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:48:38 crc kubenswrapper[4955]: E1128 06:48:38.710701 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:48:42 crc kubenswrapper[4955]: I1128 06:48:42.030839 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-8z9v4"] Nov 28 06:48:42 crc kubenswrapper[4955]: I1128 06:48:42.039043 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7wwdm"] Nov 28 06:48:42 crc kubenswrapper[4955]: I1128 06:48:42.046709 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9451-account-create-update-5f989"] Nov 28 06:48:42 crc kubenswrapper[4955]: I1128 06:48:42.055030 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7wwdm"] Nov 28 06:48:42 crc kubenswrapper[4955]: I1128 06:48:42.062497 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-8z9v4"] Nov 28 06:48:42 crc kubenswrapper[4955]: I1128 06:48:42.069826 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9451-account-create-update-5f989"] Nov 28 06:48:43 crc kubenswrapper[4955]: I1128 06:48:43.722970 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebf5806-8556-44f9-8aaa-6dc42411d41a" path="/var/lib/kubelet/pods/6ebf5806-8556-44f9-8aaa-6dc42411d41a/volumes" Nov 28 06:48:43 crc kubenswrapper[4955]: I1128 06:48:43.724984 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce267f10-88a5-4963-82f3-2bf40a69d1f5" path="/var/lib/kubelet/pods/ce267f10-88a5-4963-82f3-2bf40a69d1f5/volumes" Nov 28 06:48:43 crc kubenswrapper[4955]: I1128 06:48:43.726374 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a74ed3-3caa-473a-8397-88c67b97775f" path="/var/lib/kubelet/pods/e6a74ed3-3caa-473a-8397-88c67b97775f/volumes" Nov 28 
06:48:48 crc kubenswrapper[4955]: I1128 06:48:48.067473 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-t5x9j"] Nov 28 06:48:48 crc kubenswrapper[4955]: I1128 06:48:48.082461 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-t5x9j"] Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.043667 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-tsnf2"] Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.052420 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e755-account-create-update-k8drh"] Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.065064 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-tsnf2"] Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.073320 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-46f3-account-create-update-b4qrc"] Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.089250 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-46f3-account-create-update-b4qrc"] Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.116256 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e755-account-create-update-k8drh"] Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.719860 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6185b77e-1d4a-4e4c-9bca-f322a2339ee0" path="/var/lib/kubelet/pods/6185b77e-1d4a-4e4c-9bca-f322a2339ee0/volumes" Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.723655 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eaf9231-c7bd-4a41-b9b9-2370274a779b" path="/var/lib/kubelet/pods/6eaf9231-c7bd-4a41-b9b9-2370274a779b/volumes" Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.725191 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bf7bffc4-2591-486f-ac91-07aa7b2e8c30" path="/var/lib/kubelet/pods/bf7bffc4-2591-486f-ac91-07aa7b2e8c30/volumes" Nov 28 06:48:49 crc kubenswrapper[4955]: I1128 06:48:49.726809 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65ca1af-dba4-4c0d-80ba-31b1f15957c3" path="/var/lib/kubelet/pods/f65ca1af-dba4-4c0d-80ba-31b1f15957c3/volumes" Nov 28 06:48:50 crc kubenswrapper[4955]: I1128 06:48:50.704626 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:48:50 crc kubenswrapper[4955]: E1128 06:48:50.705157 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:48:54 crc kubenswrapper[4955]: I1128 06:48:54.036759 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-l9zcs"] Nov 28 06:48:54 crc kubenswrapper[4955]: I1128 06:48:54.044858 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-l9zcs"] Nov 28 06:48:55 crc kubenswrapper[4955]: I1128 06:48:55.717389 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97d232b-3a4f-4080-9943-b9e2c61b3d44" path="/var/lib/kubelet/pods/e97d232b-3a4f-4080-9943-b9e2c61b3d44/volumes" Nov 28 06:49:01 crc kubenswrapper[4955]: I1128 06:49:01.704261 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:49:01 crc kubenswrapper[4955]: E1128 06:49:01.704898 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:49:13 crc kubenswrapper[4955]: I1128 06:49:13.704389 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:49:13 crc kubenswrapper[4955]: E1128 06:49:13.705307 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:49:22 crc kubenswrapper[4955]: I1128 06:49:22.044681 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ql7bs"] Nov 28 06:49:22 crc kubenswrapper[4955]: I1128 06:49:22.052623 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ql7bs"] Nov 28 06:49:23 crc kubenswrapper[4955]: I1128 06:49:23.716201 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d0bb158-ce32-468c-a2cc-b99759e19390" path="/var/lib/kubelet/pods/2d0bb158-ce32-468c-a2cc-b99759e19390/volumes" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.043801 4955 scope.go:117] "RemoveContainer" containerID="4a4c047de2c383fafd3374e492eb19bcbaf1436edf5fef0b5a7caf0df50c0d48" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.076631 4955 scope.go:117] "RemoveContainer" containerID="36080a6c2fd791b619a467a004bacf0992e25930205756bd4f544b4c2086a005" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.167223 4955 scope.go:117] "RemoveContainer" 
containerID="3e46318759576b68895285af0db24913171a9bf8d6f917f080b5a24b0f0ed32f" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.244311 4955 scope.go:117] "RemoveContainer" containerID="1fc8347d67bcaf4e2aebeacf77f556747887a3ae28a2cb63eee041abda3093fc" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.283445 4955 scope.go:117] "RemoveContainer" containerID="261b419b76679b0fbc2a00ddaae2de554595059d65f2af59a0e72e9ca8bab205" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.315313 4955 scope.go:117] "RemoveContainer" containerID="549c6ebfc94867f528040dfcd453e180e98d408d9742a005d1a5aff353aa33fd" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.352806 4955 scope.go:117] "RemoveContainer" containerID="f6d74f5b7d968955626d79434fe8f998000e145da80ab1b7ddb59eeec178b4e2" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.398204 4955 scope.go:117] "RemoveContainer" containerID="a683b62083abcb07c2be99f2b03718e6acb73ab2419a45583890e573ff345358" Nov 28 06:49:26 crc kubenswrapper[4955]: I1128 06:49:26.423237 4955 scope.go:117] "RemoveContainer" containerID="1524f45a8f8cec86c4798ab5c65531d521641dc0d5e6a475af90504c31db328f" Nov 28 06:49:27 crc kubenswrapper[4955]: I1128 06:49:27.713081 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:49:27 crc kubenswrapper[4955]: E1128 06:49:27.713423 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:49:36 crc kubenswrapper[4955]: I1128 06:49:36.049196 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jqvkj"] Nov 28 06:49:36 
crc kubenswrapper[4955]: I1128 06:49:36.066015 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jqvkj"] Nov 28 06:49:36 crc kubenswrapper[4955]: I1128 06:49:36.079375 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-44sl9"] Nov 28 06:49:36 crc kubenswrapper[4955]: I1128 06:49:36.089428 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-44sl9"] Nov 28 06:49:37 crc kubenswrapper[4955]: I1128 06:49:37.717207 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a694432-dcc2-45d4-a492-f43f79169fc4" path="/var/lib/kubelet/pods/0a694432-dcc2-45d4-a492-f43f79169fc4/volumes" Nov 28 06:49:37 crc kubenswrapper[4955]: I1128 06:49:37.718325 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bddb6574-5273-410a-93aa-5293a16dfeba" path="/var/lib/kubelet/pods/bddb6574-5273-410a-93aa-5293a16dfeba/volumes" Nov 28 06:49:38 crc kubenswrapper[4955]: I1128 06:49:38.705526 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:49:38 crc kubenswrapper[4955]: E1128 06:49:38.705890 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:49:40 crc kubenswrapper[4955]: I1128 06:49:40.685819 4955 generic.go:334] "Generic (PLEG): container finished" podID="3efdabfd-7ad3-4586-8398-97512113e085" containerID="1ee1a8e7390e274c3ed5b7ffebc6eba32a8f5c19cba3ef6e7613e254d72dad85" exitCode=0 Nov 28 06:49:40 crc kubenswrapper[4955]: I1128 06:49:40.685970 4955 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" event={"ID":"3efdabfd-7ad3-4586-8398-97512113e085","Type":"ContainerDied","Data":"1ee1a8e7390e274c3ed5b7ffebc6eba32a8f5c19cba3ef6e7613e254d72dad85"} Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.159810 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.261010 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbps7\" (UniqueName: \"kubernetes.io/projected/3efdabfd-7ad3-4586-8398-97512113e085-kube-api-access-cbps7\") pod \"3efdabfd-7ad3-4586-8398-97512113e085\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.261150 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-inventory\") pod \"3efdabfd-7ad3-4586-8398-97512113e085\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.261233 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-ssh-key\") pod \"3efdabfd-7ad3-4586-8398-97512113e085\" (UID: \"3efdabfd-7ad3-4586-8398-97512113e085\") " Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.270717 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efdabfd-7ad3-4586-8398-97512113e085-kube-api-access-cbps7" (OuterVolumeSpecName: "kube-api-access-cbps7") pod "3efdabfd-7ad3-4586-8398-97512113e085" (UID: "3efdabfd-7ad3-4586-8398-97512113e085"). InnerVolumeSpecName "kube-api-access-cbps7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.295889 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3efdabfd-7ad3-4586-8398-97512113e085" (UID: "3efdabfd-7ad3-4586-8398-97512113e085"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.316587 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-inventory" (OuterVolumeSpecName: "inventory") pod "3efdabfd-7ad3-4586-8398-97512113e085" (UID: "3efdabfd-7ad3-4586-8398-97512113e085"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.364012 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbps7\" (UniqueName: \"kubernetes.io/projected/3efdabfd-7ad3-4586-8398-97512113e085-kube-api-access-cbps7\") on node \"crc\" DevicePath \"\"" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.364261 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.364377 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3efdabfd-7ad3-4586-8398-97512113e085-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.712950 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" 
event={"ID":"3efdabfd-7ad3-4586-8398-97512113e085","Type":"ContainerDied","Data":"ab8816079c3af07ed9b0da4a857b92b2c7057e1034e7086299352aaddc62480c"} Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.713304 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab8816079c3af07ed9b0da4a857b92b2c7057e1034e7086299352aaddc62480c" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.713402 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b567k" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.840829 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m"] Nov 28 06:49:42 crc kubenswrapper[4955]: E1128 06:49:42.841672 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efdabfd-7ad3-4586-8398-97512113e085" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.841713 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efdabfd-7ad3-4586-8398-97512113e085" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.842202 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efdabfd-7ad3-4586-8398-97512113e085" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.843373 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.847115 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.847456 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.847816 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.848039 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.849498 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m"] Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.979273 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 06:49:42.979537 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwq9t\" (UniqueName: \"kubernetes.io/projected/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-kube-api-access-lwq9t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:42 crc kubenswrapper[4955]: I1128 
06:49:42.979565 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:43 crc kubenswrapper[4955]: I1128 06:49:43.081315 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwq9t\" (UniqueName: \"kubernetes.io/projected/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-kube-api-access-lwq9t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:43 crc kubenswrapper[4955]: I1128 06:49:43.081379 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:43 crc kubenswrapper[4955]: I1128 06:49:43.081561 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:43 crc kubenswrapper[4955]: I1128 06:49:43.086814 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:43 crc kubenswrapper[4955]: I1128 06:49:43.089366 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:43 crc kubenswrapper[4955]: I1128 06:49:43.100368 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwq9t\" (UniqueName: \"kubernetes.io/projected/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-kube-api-access-lwq9t\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jck8m\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:43 crc kubenswrapper[4955]: I1128 06:49:43.200681 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:43 crc kubenswrapper[4955]: I1128 06:49:43.775655 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m"] Nov 28 06:49:44 crc kubenswrapper[4955]: I1128 06:49:44.739590 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" event={"ID":"a95ee68c-d5b2-490f-a4e4-33bb8bb56536","Type":"ContainerStarted","Data":"4ff3ce2af4b027af6ddfc437d0831566ae2a9e30b966e71d04a12a5d7da4d9f6"} Nov 28 06:49:44 crc kubenswrapper[4955]: I1128 06:49:44.740973 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" event={"ID":"a95ee68c-d5b2-490f-a4e4-33bb8bb56536","Type":"ContainerStarted","Data":"93939ce8a3fb943dadb9f5326e3c7295de221b1671257a7b5a1c68562f229b61"} Nov 28 06:49:44 crc kubenswrapper[4955]: I1128 06:49:44.755913 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" podStartSLOduration=2.166503291 podStartE2EDuration="2.755897095s" podCreationTimestamp="2025-11-28 06:49:42 +0000 UTC" firstStartedPulling="2025-11-28 06:49:43.792848376 +0000 UTC m=+1706.382103946" lastFinishedPulling="2025-11-28 06:49:44.38224217 +0000 UTC m=+1706.971497750" observedRunningTime="2025-11-28 06:49:44.752060157 +0000 UTC m=+1707.341315737" watchObservedRunningTime="2025-11-28 06:49:44.755897095 +0000 UTC m=+1707.345152665" Nov 28 06:49:46 crc kubenswrapper[4955]: I1128 06:49:46.056658 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-h7lw7"] Nov 28 06:49:46 crc kubenswrapper[4955]: I1128 06:49:46.065650 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-h7lw7"] Nov 28 06:49:47 crc kubenswrapper[4955]: 
I1128 06:49:47.038544 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-d4bdh"] Nov 28 06:49:47 crc kubenswrapper[4955]: I1128 06:49:47.049402 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-d4bdh"] Nov 28 06:49:47 crc kubenswrapper[4955]: I1128 06:49:47.715371 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ffeda94-da23-484b-b623-fe3101c66890" path="/var/lib/kubelet/pods/0ffeda94-da23-484b-b623-fe3101c66890/volumes" Nov 28 06:49:47 crc kubenswrapper[4955]: I1128 06:49:47.716335 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c39c6827-9dc3-482d-a268-8ba9348b925e" path="/var/lib/kubelet/pods/c39c6827-9dc3-482d-a268-8ba9348b925e/volumes" Nov 28 06:49:49 crc kubenswrapper[4955]: I1128 06:49:49.790258 4955 generic.go:334] "Generic (PLEG): container finished" podID="a95ee68c-d5b2-490f-a4e4-33bb8bb56536" containerID="4ff3ce2af4b027af6ddfc437d0831566ae2a9e30b966e71d04a12a5d7da4d9f6" exitCode=0 Nov 28 06:49:49 crc kubenswrapper[4955]: I1128 06:49:49.790306 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" event={"ID":"a95ee68c-d5b2-490f-a4e4-33bb8bb56536","Type":"ContainerDied","Data":"4ff3ce2af4b027af6ddfc437d0831566ae2a9e30b966e71d04a12a5d7da4d9f6"} Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.311407 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.461500 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwq9t\" (UniqueName: \"kubernetes.io/projected/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-kube-api-access-lwq9t\") pod \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.461630 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-ssh-key\") pod \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.461819 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-inventory\") pod \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\" (UID: \"a95ee68c-d5b2-490f-a4e4-33bb8bb56536\") " Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.470294 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-kube-api-access-lwq9t" (OuterVolumeSpecName: "kube-api-access-lwq9t") pod "a95ee68c-d5b2-490f-a4e4-33bb8bb56536" (UID: "a95ee68c-d5b2-490f-a4e4-33bb8bb56536"). InnerVolumeSpecName "kube-api-access-lwq9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.489994 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-inventory" (OuterVolumeSpecName: "inventory") pod "a95ee68c-d5b2-490f-a4e4-33bb8bb56536" (UID: "a95ee68c-d5b2-490f-a4e4-33bb8bb56536"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.500573 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a95ee68c-d5b2-490f-a4e4-33bb8bb56536" (UID: "a95ee68c-d5b2-490f-a4e4-33bb8bb56536"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.564547 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.564596 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwq9t\" (UniqueName: \"kubernetes.io/projected/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-kube-api-access-lwq9t\") on node \"crc\" DevicePath \"\"" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.564615 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a95ee68c-d5b2-490f-a4e4-33bb8bb56536-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.824130 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" event={"ID":"a95ee68c-d5b2-490f-a4e4-33bb8bb56536","Type":"ContainerDied","Data":"93939ce8a3fb943dadb9f5326e3c7295de221b1671257a7b5a1c68562f229b61"} Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.824170 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jck8m" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.824174 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93939ce8a3fb943dadb9f5326e3c7295de221b1671257a7b5a1c68562f229b61" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.901000 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76"] Nov 28 06:49:51 crc kubenswrapper[4955]: E1128 06:49:51.901876 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a95ee68c-d5b2-490f-a4e4-33bb8bb56536" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.901990 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="a95ee68c-d5b2-490f-a4e4-33bb8bb56536" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.902267 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="a95ee68c-d5b2-490f-a4e4-33bb8bb56536" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.902904 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.916452 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76"] Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.916935 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.917618 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.917765 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.917887 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.973923 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.975244 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvct6\" (UniqueName: \"kubernetes.io/projected/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-kube-api-access-fvct6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:51 crc kubenswrapper[4955]: I1128 06:49:51.975398 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.077745 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvct6\" (UniqueName: \"kubernetes.io/projected/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-kube-api-access-fvct6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.077816 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.077884 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.083174 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: 
\"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.088102 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.107786 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvct6\" (UniqueName: \"kubernetes.io/projected/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-kube-api-access-fvct6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2jx76\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.272361 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.704947 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:49:52 crc kubenswrapper[4955]: E1128 06:49:52.706012 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.791657 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76"] Nov 28 06:49:52 crc kubenswrapper[4955]: I1128 06:49:52.836776 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" event={"ID":"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d","Type":"ContainerStarted","Data":"09c7163fe9dc43859cfa0b1cbbb2404b09f618b80933e61ee72494926107b232"} Nov 28 06:49:53 crc kubenswrapper[4955]: I1128 06:49:53.846185 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" event={"ID":"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d","Type":"ContainerStarted","Data":"f1841147f08c46fd642bc8cfcefc1b255e1431d35719cf8c4343a6c19110fd01"} Nov 28 06:49:53 crc kubenswrapper[4955]: I1128 06:49:53.867096 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" podStartSLOduration=2.375268524 podStartE2EDuration="2.867082199s" podCreationTimestamp="2025-11-28 06:49:51 +0000 UTC" firstStartedPulling="2025-11-28 
06:49:52.800790172 +0000 UTC m=+1715.390045742" lastFinishedPulling="2025-11-28 06:49:53.292603817 +0000 UTC m=+1715.881859417" observedRunningTime="2025-11-28 06:49:53.865123394 +0000 UTC m=+1716.454378974" watchObservedRunningTime="2025-11-28 06:49:53.867082199 +0000 UTC m=+1716.456337769" Nov 28 06:50:04 crc kubenswrapper[4955]: I1128 06:50:04.704271 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:50:04 crc kubenswrapper[4955]: E1128 06:50:04.705043 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:50:16 crc kubenswrapper[4955]: I1128 06:50:16.705496 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:50:16 crc kubenswrapper[4955]: E1128 06:50:16.706780 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:50:26 crc kubenswrapper[4955]: I1128 06:50:26.052876 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pgx4c"] Nov 28 06:50:26 crc kubenswrapper[4955]: I1128 06:50:26.064226 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pgx4c"] Nov 28 06:50:26 crc 
kubenswrapper[4955]: I1128 06:50:26.655374 4955 scope.go:117] "RemoveContainer" containerID="9382abf70a398fe8d51cd5f1613349f40e94b7d715e98cdd1de46edc67f0a9ac" Nov 28 06:50:26 crc kubenswrapper[4955]: I1128 06:50:26.710298 4955 scope.go:117] "RemoveContainer" containerID="a5ed897781d3623593458af5cd707583a496e16dc664b58fcb2d35086cffc6f6" Nov 28 06:50:26 crc kubenswrapper[4955]: I1128 06:50:26.744576 4955 scope.go:117] "RemoveContainer" containerID="e9bdb652f84a64351bdc8fbca14f55c3a9a4c49d969670a28d027efe545304fe" Nov 28 06:50:26 crc kubenswrapper[4955]: I1128 06:50:26.793537 4955 scope.go:117] "RemoveContainer" containerID="29b5074a5ec47e9df1b5a1ec376b9b74bd8e66bae0e08f34fe8c5df01f665d85" Nov 28 06:50:27 crc kubenswrapper[4955]: I1128 06:50:27.042778 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-32f5-account-create-update-wd6jr"] Nov 28 06:50:27 crc kubenswrapper[4955]: I1128 06:50:27.056827 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-32f5-account-create-update-wd6jr"] Nov 28 06:50:27 crc kubenswrapper[4955]: I1128 06:50:27.717459 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123962fc-cb22-41e5-92c2-fce487c07003" path="/var/lib/kubelet/pods/123962fc-cb22-41e5-92c2-fce487c07003/volumes" Nov 28 06:50:27 crc kubenswrapper[4955]: I1128 06:50:27.718182 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="530951b8-e0b5-44a3-aaf5-48c74bf91dba" path="/var/lib/kubelet/pods/530951b8-e0b5-44a3-aaf5-48c74bf91dba/volumes" Nov 28 06:50:28 crc kubenswrapper[4955]: I1128 06:50:28.030433 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-j944d"] Nov 28 06:50:28 crc kubenswrapper[4955]: I1128 06:50:28.041751 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-38fa-account-create-update-l7s8x"] Nov 28 06:50:28 crc kubenswrapper[4955]: I1128 06:50:28.052630 4955 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-api-fc74-account-create-update-zxq7m"] Nov 28 06:50:28 crc kubenswrapper[4955]: I1128 06:50:28.060807 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-j944d"] Nov 28 06:50:28 crc kubenswrapper[4955]: I1128 06:50:28.068870 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fc74-account-create-update-zxq7m"] Nov 28 06:50:28 crc kubenswrapper[4955]: I1128 06:50:28.077739 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-38fa-account-create-update-l7s8x"] Nov 28 06:50:28 crc kubenswrapper[4955]: I1128 06:50:28.085757 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-b2j6d"] Nov 28 06:50:28 crc kubenswrapper[4955]: I1128 06:50:28.094106 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-b2j6d"] Nov 28 06:50:29 crc kubenswrapper[4955]: I1128 06:50:29.721862 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071d5666-d995-439f-b56a-c4feb0b11ce1" path="/var/lib/kubelet/pods/071d5666-d995-439f-b56a-c4feb0b11ce1/volumes" Nov 28 06:50:29 crc kubenswrapper[4955]: I1128 06:50:29.723040 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b5c523-cb74-4ad4-b14b-0eefd62138b1" path="/var/lib/kubelet/pods/a3b5c523-cb74-4ad4-b14b-0eefd62138b1/volumes" Nov 28 06:50:29 crc kubenswrapper[4955]: I1128 06:50:29.724264 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c059ac08-bd13-4b84-a39f-f1a9e2260b5e" path="/var/lib/kubelet/pods/c059ac08-bd13-4b84-a39f-f1a9e2260b5e/volumes" Nov 28 06:50:29 crc kubenswrapper[4955]: I1128 06:50:29.725469 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df327a4d-740c-44af-aeab-e196f406408d" path="/var/lib/kubelet/pods/df327a4d-740c-44af-aeab-e196f406408d/volumes" Nov 28 06:50:30 crc kubenswrapper[4955]: I1128 06:50:30.705435 4955 scope.go:117] 
"RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:50:30 crc kubenswrapper[4955]: E1128 06:50:30.706474 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:50:33 crc kubenswrapper[4955]: I1128 06:50:33.249083 4955 generic.go:334] "Generic (PLEG): container finished" podID="86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d" containerID="f1841147f08c46fd642bc8cfcefc1b255e1431d35719cf8c4343a6c19110fd01" exitCode=0 Nov 28 06:50:33 crc kubenswrapper[4955]: I1128 06:50:33.249140 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" event={"ID":"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d","Type":"ContainerDied","Data":"f1841147f08c46fd642bc8cfcefc1b255e1431d35719cf8c4343a6c19110fd01"} Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.713393 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.775205 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-inventory\") pod \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.775443 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvct6\" (UniqueName: \"kubernetes.io/projected/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-kube-api-access-fvct6\") pod \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.775491 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-ssh-key\") pod \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\" (UID: \"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d\") " Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.782983 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-kube-api-access-fvct6" (OuterVolumeSpecName: "kube-api-access-fvct6") pod "86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d" (UID: "86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d"). InnerVolumeSpecName "kube-api-access-fvct6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.802917 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-inventory" (OuterVolumeSpecName: "inventory") pod "86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d" (UID: "86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.804480 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d" (UID: "86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.878238 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvct6\" (UniqueName: \"kubernetes.io/projected/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-kube-api-access-fvct6\") on node \"crc\" DevicePath \"\"" Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.878274 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:50:34 crc kubenswrapper[4955]: I1128 06:50:34.878286 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.273348 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" event={"ID":"86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d","Type":"ContainerDied","Data":"09c7163fe9dc43859cfa0b1cbbb2404b09f618b80933e61ee72494926107b232"} Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.273822 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09c7163fe9dc43859cfa0b1cbbb2404b09f618b80933e61ee72494926107b232" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.273424 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2jx76" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.375199 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9"] Nov 28 06:50:35 crc kubenswrapper[4955]: E1128 06:50:35.375917 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.376036 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.376366 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.377258 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.379728 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.379939 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.380262 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.381001 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.408470 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9"] Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.490292 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdsp\" (UniqueName: \"kubernetes.io/projected/40082d1e-0844-4d3d-9c68-25fb8eb44351-kube-api-access-wcdsp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.490371 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.490401 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.592495 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.592595 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.592963 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcdsp\" (UniqueName: \"kubernetes.io/projected/40082d1e-0844-4d3d-9c68-25fb8eb44351-kube-api-access-wcdsp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.596934 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: 
\"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.598213 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.615809 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcdsp\" (UniqueName: \"kubernetes.io/projected/40082d1e-0844-4d3d-9c68-25fb8eb44351-kube-api-access-wcdsp\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:35 crc kubenswrapper[4955]: I1128 06:50:35.702067 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:50:36 crc kubenswrapper[4955]: I1128 06:50:36.237808 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9"] Nov 28 06:50:36 crc kubenswrapper[4955]: I1128 06:50:36.283474 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" event={"ID":"40082d1e-0844-4d3d-9c68-25fb8eb44351","Type":"ContainerStarted","Data":"606ed69a8ab46b1950f764890c6fecf958184d5520f5b4f1ec7e842b3dc56b37"} Nov 28 06:50:37 crc kubenswrapper[4955]: I1128 06:50:37.293151 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" event={"ID":"40082d1e-0844-4d3d-9c68-25fb8eb44351","Type":"ContainerStarted","Data":"b9db4333a097b9cf078bce26023d602e89fc7cd139bf2f889bf7b6ae0bc92d37"} Nov 28 06:50:37 crc kubenswrapper[4955]: I1128 06:50:37.308715 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" podStartSLOduration=1.692609496 podStartE2EDuration="2.308692945s" podCreationTimestamp="2025-11-28 06:50:35 +0000 UTC" firstStartedPulling="2025-11-28 06:50:36.236050667 +0000 UTC m=+1758.825306227" lastFinishedPulling="2025-11-28 06:50:36.852134106 +0000 UTC m=+1759.441389676" observedRunningTime="2025-11-28 06:50:37.308469588 +0000 UTC m=+1759.897725148" watchObservedRunningTime="2025-11-28 06:50:37.308692945 +0000 UTC m=+1759.897948535" Nov 28 06:50:43 crc kubenswrapper[4955]: I1128 06:50:43.705835 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:50:43 crc kubenswrapper[4955]: E1128 06:50:43.707074 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:50:55 crc kubenswrapper[4955]: I1128 06:50:55.045486 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t2nkh"] Nov 28 06:50:55 crc kubenswrapper[4955]: I1128 06:50:55.055722 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-t2nkh"] Nov 28 06:50:55 crc kubenswrapper[4955]: I1128 06:50:55.718472 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92884a73-5a0a-4a22-9919-03c0c4c6829d" path="/var/lib/kubelet/pods/92884a73-5a0a-4a22-9919-03c0c4c6829d/volumes" Nov 28 06:50:57 crc kubenswrapper[4955]: I1128 06:50:57.714006 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:50:57 crc kubenswrapper[4955]: E1128 06:50:57.714592 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:51:10 crc kubenswrapper[4955]: I1128 06:51:10.705260 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:51:10 crc kubenswrapper[4955]: E1128 06:51:10.706576 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:51:18 crc kubenswrapper[4955]: I1128 06:51:18.077787 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qsxnq"] Nov 28 06:51:18 crc kubenswrapper[4955]: I1128 06:51:18.095193 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xkk9"] Nov 28 06:51:18 crc kubenswrapper[4955]: I1128 06:51:18.106515 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-2xkk9"] Nov 28 06:51:18 crc kubenswrapper[4955]: I1128 06:51:18.121475 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qsxnq"] Nov 28 06:51:19 crc kubenswrapper[4955]: I1128 06:51:19.719232 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b859bc-9a75-493a-8a9e-7712775f51c9" path="/var/lib/kubelet/pods/59b859bc-9a75-493a-8a9e-7712775f51c9/volumes" Nov 28 06:51:19 crc kubenswrapper[4955]: I1128 06:51:19.720772 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa3a596-0803-435d-8362-7b80edd615cd" path="/var/lib/kubelet/pods/7aa3a596-0803-435d-8362-7b80edd615cd/volumes" Nov 28 06:51:25 crc kubenswrapper[4955]: I1128 06:51:25.704221 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:51:26 crc kubenswrapper[4955]: I1128 06:51:26.833990 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"697d772b58f553ce8a010e24cb26d55496ee8b360eb534706abc2c2bd347e32c"} Nov 28 06:51:26 crc kubenswrapper[4955]: I1128 
06:51:26.950057 4955 scope.go:117] "RemoveContainer" containerID="275494a6e9fea0ad62ca766e238b226733177049809e65a1415c691888b09c19" Nov 28 06:51:27 crc kubenswrapper[4955]: I1128 06:51:27.001484 4955 scope.go:117] "RemoveContainer" containerID="c3e614bbaad971ec28217fa5ede2c05ba656679b8cbef022e619f4af6bed57ea" Nov 28 06:51:27 crc kubenswrapper[4955]: I1128 06:51:27.026675 4955 scope.go:117] "RemoveContainer" containerID="801c361689dbcfd660a5459c7211b846dfb242c2774d661203a9450d51e680f6" Nov 28 06:51:27 crc kubenswrapper[4955]: I1128 06:51:27.077162 4955 scope.go:117] "RemoveContainer" containerID="bcdc956db5ad451139028ac7ba3d67c105b88c3119cc68aee02e711c5bccb203" Nov 28 06:51:27 crc kubenswrapper[4955]: I1128 06:51:27.108647 4955 scope.go:117] "RemoveContainer" containerID="cbee6232c8e3d77f0e42270111c72c7ffca2f564d4086cd2a6f1fc23522abea1" Nov 28 06:51:27 crc kubenswrapper[4955]: I1128 06:51:27.146569 4955 scope.go:117] "RemoveContainer" containerID="7ac7e9d6ded59c025760d3b72852c742c1ae6dc0820eb1b020a6ac5545342105" Nov 28 06:51:27 crc kubenswrapper[4955]: I1128 06:51:27.201526 4955 scope.go:117] "RemoveContainer" containerID="68d4d5223f6eeaf7850d9c12c4e08f234f20850eff14c4503a4d6dd45471e815" Nov 28 06:51:27 crc kubenswrapper[4955]: I1128 06:51:27.237090 4955 scope.go:117] "RemoveContainer" containerID="5875f58846ca9808fcd2e50acb546eadaa6919541781087c149e0d35bfa2ef88" Nov 28 06:51:27 crc kubenswrapper[4955]: I1128 06:51:27.257462 4955 scope.go:117] "RemoveContainer" containerID="539518ae979c619e26be06d94055175af6fc8d73359f9303576a6d8784eec246" Nov 28 06:51:34 crc kubenswrapper[4955]: I1128 06:51:34.935498 4955 generic.go:334] "Generic (PLEG): container finished" podID="40082d1e-0844-4d3d-9c68-25fb8eb44351" containerID="b9db4333a097b9cf078bce26023d602e89fc7cd139bf2f889bf7b6ae0bc92d37" exitCode=0 Nov 28 06:51:34 crc kubenswrapper[4955]: I1128 06:51:34.935785 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" event={"ID":"40082d1e-0844-4d3d-9c68-25fb8eb44351","Type":"ContainerDied","Data":"b9db4333a097b9cf078bce26023d602e89fc7cd139bf2f889bf7b6ae0bc92d37"} Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.507438 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.605284 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-inventory\") pod \"40082d1e-0844-4d3d-9c68-25fb8eb44351\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.605458 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-ssh-key\") pod \"40082d1e-0844-4d3d-9c68-25fb8eb44351\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.605607 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcdsp\" (UniqueName: \"kubernetes.io/projected/40082d1e-0844-4d3d-9c68-25fb8eb44351-kube-api-access-wcdsp\") pod \"40082d1e-0844-4d3d-9c68-25fb8eb44351\" (UID: \"40082d1e-0844-4d3d-9c68-25fb8eb44351\") " Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.612714 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40082d1e-0844-4d3d-9c68-25fb8eb44351-kube-api-access-wcdsp" (OuterVolumeSpecName: "kube-api-access-wcdsp") pod "40082d1e-0844-4d3d-9c68-25fb8eb44351" (UID: "40082d1e-0844-4d3d-9c68-25fb8eb44351"). InnerVolumeSpecName "kube-api-access-wcdsp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.639713 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "40082d1e-0844-4d3d-9c68-25fb8eb44351" (UID: "40082d1e-0844-4d3d-9c68-25fb8eb44351"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.660446 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-inventory" (OuterVolumeSpecName: "inventory") pod "40082d1e-0844-4d3d-9c68-25fb8eb44351" (UID: "40082d1e-0844-4d3d-9c68-25fb8eb44351"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.707267 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.707301 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcdsp\" (UniqueName: \"kubernetes.io/projected/40082d1e-0844-4d3d-9c68-25fb8eb44351-kube-api-access-wcdsp\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.707314 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40082d1e-0844-4d3d-9c68-25fb8eb44351-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.958232 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" 
event={"ID":"40082d1e-0844-4d3d-9c68-25fb8eb44351","Type":"ContainerDied","Data":"606ed69a8ab46b1950f764890c6fecf958184d5520f5b4f1ec7e842b3dc56b37"} Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.958307 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="606ed69a8ab46b1950f764890c6fecf958184d5520f5b4f1ec7e842b3dc56b37" Nov 28 06:51:36 crc kubenswrapper[4955]: I1128 06:51:36.958344 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.179372 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hqrs6"] Nov 28 06:51:37 crc kubenswrapper[4955]: E1128 06:51:37.180024 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40082d1e-0844-4d3d-9c68-25fb8eb44351" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.180042 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="40082d1e-0844-4d3d-9c68-25fb8eb44351" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.180239 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="40082d1e-0844-4d3d-9c68-25fb8eb44351" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.180914 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.188045 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.188400 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.188881 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.189124 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.194268 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hqrs6"] Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.216641 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.216796 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wkmf\" (UniqueName: \"kubernetes.io/projected/6788af76-b07b-492d-b4bb-dceb2d35b853-kube-api-access-7wkmf\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.216866 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.318645 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.318787 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wkmf\" (UniqueName: \"kubernetes.io/projected/6788af76-b07b-492d-b4bb-dceb2d35b853-kube-api-access-7wkmf\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.318870 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.324105 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: 
I1128 06:51:37.327747 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.338196 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wkmf\" (UniqueName: \"kubernetes.io/projected/6788af76-b07b-492d-b4bb-dceb2d35b853-kube-api-access-7wkmf\") pod \"ssh-known-hosts-edpm-deployment-hqrs6\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:37 crc kubenswrapper[4955]: I1128 06:51:37.511678 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:38 crc kubenswrapper[4955]: I1128 06:51:38.076410 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-hqrs6"] Nov 28 06:51:38 crc kubenswrapper[4955]: I1128 06:51:38.976985 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" event={"ID":"6788af76-b07b-492d-b4bb-dceb2d35b853","Type":"ContainerStarted","Data":"f4c3b12ad7eb5dd8113b3f912c2762d538d72cfa77f2a47181f7ed2adcf47c91"} Nov 28 06:51:39 crc kubenswrapper[4955]: I1128 06:51:39.987787 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" event={"ID":"6788af76-b07b-492d-b4bb-dceb2d35b853","Type":"ContainerStarted","Data":"9cdbec55e7ac4c33e73e895680bc5ed7ba703225f307ad5ebd204280465ed9ba"} Nov 28 06:51:40 crc kubenswrapper[4955]: I1128 06:51:40.005229 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" podStartSLOduration=2.322755796 
podStartE2EDuration="3.005203122s" podCreationTimestamp="2025-11-28 06:51:37 +0000 UTC" firstStartedPulling="2025-11-28 06:51:38.089795036 +0000 UTC m=+1820.679050616" lastFinishedPulling="2025-11-28 06:51:38.772242372 +0000 UTC m=+1821.361497942" observedRunningTime="2025-11-28 06:51:40.001865998 +0000 UTC m=+1822.591121608" watchObservedRunningTime="2025-11-28 06:51:40.005203122 +0000 UTC m=+1822.594458702" Nov 28 06:51:47 crc kubenswrapper[4955]: I1128 06:51:47.062278 4955 generic.go:334] "Generic (PLEG): container finished" podID="6788af76-b07b-492d-b4bb-dceb2d35b853" containerID="9cdbec55e7ac4c33e73e895680bc5ed7ba703225f307ad5ebd204280465ed9ba" exitCode=0 Nov 28 06:51:47 crc kubenswrapper[4955]: I1128 06:51:47.062369 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" event={"ID":"6788af76-b07b-492d-b4bb-dceb2d35b853","Type":"ContainerDied","Data":"9cdbec55e7ac4c33e73e895680bc5ed7ba703225f307ad5ebd204280465ed9ba"} Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.546300 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.632544 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wkmf\" (UniqueName: \"kubernetes.io/projected/6788af76-b07b-492d-b4bb-dceb2d35b853-kube-api-access-7wkmf\") pod \"6788af76-b07b-492d-b4bb-dceb2d35b853\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.632731 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-inventory-0\") pod \"6788af76-b07b-492d-b4bb-dceb2d35b853\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.632799 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-ssh-key-openstack-edpm-ipam\") pod \"6788af76-b07b-492d-b4bb-dceb2d35b853\" (UID: \"6788af76-b07b-492d-b4bb-dceb2d35b853\") " Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.639633 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6788af76-b07b-492d-b4bb-dceb2d35b853-kube-api-access-7wkmf" (OuterVolumeSpecName: "kube-api-access-7wkmf") pod "6788af76-b07b-492d-b4bb-dceb2d35b853" (UID: "6788af76-b07b-492d-b4bb-dceb2d35b853"). InnerVolumeSpecName "kube-api-access-7wkmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.668578 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6788af76-b07b-492d-b4bb-dceb2d35b853" (UID: "6788af76-b07b-492d-b4bb-dceb2d35b853"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.671901 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "6788af76-b07b-492d-b4bb-dceb2d35b853" (UID: "6788af76-b07b-492d-b4bb-dceb2d35b853"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.734379 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.734410 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wkmf\" (UniqueName: \"kubernetes.io/projected/6788af76-b07b-492d-b4bb-dceb2d35b853-kube-api-access-7wkmf\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:48 crc kubenswrapper[4955]: I1128 06:51:48.734421 4955 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/6788af76-b07b-492d-b4bb-dceb2d35b853-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.089432 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" 
event={"ID":"6788af76-b07b-492d-b4bb-dceb2d35b853","Type":"ContainerDied","Data":"f4c3b12ad7eb5dd8113b3f912c2762d538d72cfa77f2a47181f7ed2adcf47c91"} Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.089743 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4c3b12ad7eb5dd8113b3f912c2762d538d72cfa77f2a47181f7ed2adcf47c91" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.089917 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-hqrs6" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.237745 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq"] Nov 28 06:51:49 crc kubenswrapper[4955]: E1128 06:51:49.238248 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6788af76-b07b-492d-b4bb-dceb2d35b853" containerName="ssh-known-hosts-edpm-deployment" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.238272 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6788af76-b07b-492d-b4bb-dceb2d35b853" containerName="ssh-known-hosts-edpm-deployment" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.238471 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6788af76-b07b-492d-b4bb-dceb2d35b853" containerName="ssh-known-hosts-edpm-deployment" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.239244 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.242319 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.242535 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.246805 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.247036 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.267030 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq"] Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.350112 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmdq\" (UniqueName: \"kubernetes.io/projected/41f03c76-5015-4f05-bf3d-0c21610c1a50-kube-api-access-llmdq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.350197 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.350241 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.452171 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.452253 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.452374 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llmdq\" (UniqueName: \"kubernetes.io/projected/41f03c76-5015-4f05-bf3d-0c21610c1a50-kube-api-access-llmdq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.458125 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.458270 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.471752 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llmdq\" (UniqueName: \"kubernetes.io/projected/41f03c76-5015-4f05-bf3d-0c21610c1a50-kube-api-access-llmdq\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzmhq\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:49 crc kubenswrapper[4955]: I1128 06:51:49.558411 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:51:50 crc kubenswrapper[4955]: I1128 06:51:50.078787 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq"] Nov 28 06:51:50 crc kubenswrapper[4955]: I1128 06:51:50.098200 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" event={"ID":"41f03c76-5015-4f05-bf3d-0c21610c1a50","Type":"ContainerStarted","Data":"24dc5f8dba8cde26ea6900b756e0843d85de96e96dfab6d005ac314155024546"} Nov 28 06:51:51 crc kubenswrapper[4955]: I1128 06:51:51.106668 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" event={"ID":"41f03c76-5015-4f05-bf3d-0c21610c1a50","Type":"ContainerStarted","Data":"8295cb99a90cf3e0c94102d837841225d81eecf063604e0d1264c7af18db5577"} Nov 28 06:51:51 crc kubenswrapper[4955]: I1128 06:51:51.135078 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" podStartSLOduration=1.631143834 podStartE2EDuration="2.135055432s" podCreationTimestamp="2025-11-28 06:51:49 +0000 UTC" firstStartedPulling="2025-11-28 06:51:50.08345412 +0000 UTC m=+1832.672709700" lastFinishedPulling="2025-11-28 06:51:50.587365688 +0000 UTC m=+1833.176621298" observedRunningTime="2025-11-28 06:51:51.127665963 +0000 UTC m=+1833.716921563" watchObservedRunningTime="2025-11-28 06:51:51.135055432 +0000 UTC m=+1833.724311012" Nov 28 06:52:00 crc kubenswrapper[4955]: I1128 06:52:00.196761 4955 generic.go:334] "Generic (PLEG): container finished" podID="41f03c76-5015-4f05-bf3d-0c21610c1a50" containerID="8295cb99a90cf3e0c94102d837841225d81eecf063604e0d1264c7af18db5577" exitCode=0 Nov 28 06:52:00 crc kubenswrapper[4955]: I1128 06:52:00.196897 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" event={"ID":"41f03c76-5015-4f05-bf3d-0c21610c1a50","Type":"ContainerDied","Data":"8295cb99a90cf3e0c94102d837841225d81eecf063604e0d1264c7af18db5577"} Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.625748 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.810156 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-inventory\") pod \"41f03c76-5015-4f05-bf3d-0c21610c1a50\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.810342 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-ssh-key\") pod \"41f03c76-5015-4f05-bf3d-0c21610c1a50\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.810420 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llmdq\" (UniqueName: \"kubernetes.io/projected/41f03c76-5015-4f05-bf3d-0c21610c1a50-kube-api-access-llmdq\") pod \"41f03c76-5015-4f05-bf3d-0c21610c1a50\" (UID: \"41f03c76-5015-4f05-bf3d-0c21610c1a50\") " Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.816815 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f03c76-5015-4f05-bf3d-0c21610c1a50-kube-api-access-llmdq" (OuterVolumeSpecName: "kube-api-access-llmdq") pod "41f03c76-5015-4f05-bf3d-0c21610c1a50" (UID: "41f03c76-5015-4f05-bf3d-0c21610c1a50"). InnerVolumeSpecName "kube-api-access-llmdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.849462 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-inventory" (OuterVolumeSpecName: "inventory") pod "41f03c76-5015-4f05-bf3d-0c21610c1a50" (UID: "41f03c76-5015-4f05-bf3d-0c21610c1a50"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.852748 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "41f03c76-5015-4f05-bf3d-0c21610c1a50" (UID: "41f03c76-5015-4f05-bf3d-0c21610c1a50"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.912355 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.912400 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llmdq\" (UniqueName: \"kubernetes.io/projected/41f03c76-5015-4f05-bf3d-0c21610c1a50-kube-api-access-llmdq\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:01 crc kubenswrapper[4955]: I1128 06:52:01.912411 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41f03c76-5015-4f05-bf3d-0c21610c1a50-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.223726 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" 
event={"ID":"41f03c76-5015-4f05-bf3d-0c21610c1a50","Type":"ContainerDied","Data":"24dc5f8dba8cde26ea6900b756e0843d85de96e96dfab6d005ac314155024546"} Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.223771 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24dc5f8dba8cde26ea6900b756e0843d85de96e96dfab6d005ac314155024546" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.223833 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzmhq" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.303219 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc"] Nov 28 06:52:02 crc kubenswrapper[4955]: E1128 06:52:02.303702 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f03c76-5015-4f05-bf3d-0c21610c1a50" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.303723 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f03c76-5015-4f05-bf3d-0c21610c1a50" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.303973 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f03c76-5015-4f05-bf3d-0c21610c1a50" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.304685 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.306880 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.307308 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.307414 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.307669 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.319050 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc"] Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.423243 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.423355 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.424785 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswmd\" (UniqueName: \"kubernetes.io/projected/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-kube-api-access-cswmd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.526278 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.526590 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.526625 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswmd\" (UniqueName: \"kubernetes.io/projected/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-kube-api-access-cswmd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.534762 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: 
\"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.537139 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.544056 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswmd\" (UniqueName: \"kubernetes.io/projected/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-kube-api-access-cswmd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:02 crc kubenswrapper[4955]: I1128 06:52:02.636558 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:03 crc kubenswrapper[4955]: I1128 06:52:03.058626 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sstdc"] Nov 28 06:52:03 crc kubenswrapper[4955]: I1128 06:52:03.068840 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sstdc"] Nov 28 06:52:03 crc kubenswrapper[4955]: I1128 06:52:03.252739 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc"] Nov 28 06:52:03 crc kubenswrapper[4955]: I1128 06:52:03.726843 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc75b27e-406c-4d68-868c-75c33da792ab" path="/var/lib/kubelet/pods/dc75b27e-406c-4d68-868c-75c33da792ab/volumes" Nov 28 06:52:04 crc kubenswrapper[4955]: I1128 06:52:04.258614 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" event={"ID":"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d","Type":"ContainerStarted","Data":"b2aebaecc1972c7addf0ebb294719b06f37916f6837a19a0055ea01e8aa63213"} Nov 28 06:52:05 crc kubenswrapper[4955]: I1128 06:52:05.272547 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" event={"ID":"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d","Type":"ContainerStarted","Data":"cc8b5f4d19ca65b0146a68a4e66c4bf07617917b630c175ef2811cb03bec8ef6"} Nov 28 06:52:05 crc kubenswrapper[4955]: I1128 06:52:05.312539 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" podStartSLOduration=2.514603048 podStartE2EDuration="3.312499158s" podCreationTimestamp="2025-11-28 06:52:02 +0000 UTC" firstStartedPulling="2025-11-28 06:52:03.295332965 +0000 UTC m=+1845.884588525" lastFinishedPulling="2025-11-28 06:52:04.093229045 +0000 
UTC m=+1846.682484635" observedRunningTime="2025-11-28 06:52:05.29877547 +0000 UTC m=+1847.888031050" watchObservedRunningTime="2025-11-28 06:52:05.312499158 +0000 UTC m=+1847.901754738" Nov 28 06:52:14 crc kubenswrapper[4955]: I1128 06:52:14.372326 4955 generic.go:334] "Generic (PLEG): container finished" podID="6477c9e8-dda5-46fe-8b80-3ccc99f2b00d" containerID="cc8b5f4d19ca65b0146a68a4e66c4bf07617917b630c175ef2811cb03bec8ef6" exitCode=0 Nov 28 06:52:14 crc kubenswrapper[4955]: I1128 06:52:14.372460 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" event={"ID":"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d","Type":"ContainerDied","Data":"cc8b5f4d19ca65b0146a68a4e66c4bf07617917b630c175ef2811cb03bec8ef6"} Nov 28 06:52:15 crc kubenswrapper[4955]: I1128 06:52:15.928770 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.110783 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cswmd\" (UniqueName: \"kubernetes.io/projected/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-kube-api-access-cswmd\") pod \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.111028 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-ssh-key\") pod \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\" (UID: \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.111113 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-inventory\") pod \"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\" (UID: 
\"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d\") " Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.120549 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-kube-api-access-cswmd" (OuterVolumeSpecName: "kube-api-access-cswmd") pod "6477c9e8-dda5-46fe-8b80-3ccc99f2b00d" (UID: "6477c9e8-dda5-46fe-8b80-3ccc99f2b00d"). InnerVolumeSpecName "kube-api-access-cswmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.165656 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6477c9e8-dda5-46fe-8b80-3ccc99f2b00d" (UID: "6477c9e8-dda5-46fe-8b80-3ccc99f2b00d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.182569 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-inventory" (OuterVolumeSpecName: "inventory") pod "6477c9e8-dda5-46fe-8b80-3ccc99f2b00d" (UID: "6477c9e8-dda5-46fe-8b80-3ccc99f2b00d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.214761 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cswmd\" (UniqueName: \"kubernetes.io/projected/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-kube-api-access-cswmd\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.214814 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.214840 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6477c9e8-dda5-46fe-8b80-3ccc99f2b00d-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.400499 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" event={"ID":"6477c9e8-dda5-46fe-8b80-3ccc99f2b00d","Type":"ContainerDied","Data":"b2aebaecc1972c7addf0ebb294719b06f37916f6837a19a0055ea01e8aa63213"} Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.400784 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2aebaecc1972c7addf0ebb294719b06f37916f6837a19a0055ea01e8aa63213" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.400629 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.521026 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s"] Nov 28 06:52:16 crc kubenswrapper[4955]: E1128 06:52:16.522108 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6477c9e8-dda5-46fe-8b80-3ccc99f2b00d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.522146 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="6477c9e8-dda5-46fe-8b80-3ccc99f2b00d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.522455 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="6477c9e8-dda5-46fe-8b80-3ccc99f2b00d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.523439 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.531671 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.532083 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.532275 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.532893 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.533117 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.533371 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.533616 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.533842 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.560063 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s"] Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623434 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623496 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623546 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623576 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623618 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623657 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623680 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623705 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623781 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-ovn-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623829 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623863 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7jt\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-kube-api-access-hj7jt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623887 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623946 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-neutron-metadata-default-certs-0\") 
pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.623970 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.725240 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.725752 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.725802 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.725855 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.725918 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.725981 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.726023 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.726078 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.726124 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.726188 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.726253 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7jt\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-kube-api-access-hj7jt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.726294 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.726378 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.726418 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.732760 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.736083 4955 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.736632 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.740122 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.740390 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.740542 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.740625 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.740630 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.740944 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.741591 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: 
\"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.741984 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.742027 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.742102 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.760836 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7jt\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-kube-api-access-hj7jt\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5q92s\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:16 crc kubenswrapper[4955]: I1128 06:52:16.863171 4955 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:52:17 crc kubenswrapper[4955]: I1128 06:52:17.450148 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s"] Nov 28 06:52:18 crc kubenswrapper[4955]: I1128 06:52:18.108978 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:52:18 crc kubenswrapper[4955]: I1128 06:52:18.424205 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" event={"ID":"93204339-2c92-4d5d-a519-402ee3a45e79","Type":"ContainerStarted","Data":"112ece0ed353d9cc09c17526e1b0bce9a5aab9343a3790de834843086d667287"} Nov 28 06:52:19 crc kubenswrapper[4955]: I1128 06:52:19.436561 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" event={"ID":"93204339-2c92-4d5d-a519-402ee3a45e79","Type":"ContainerStarted","Data":"ac7d1b63bab27d8704d600ade42fd03b6500351f5ef16ba5d698fd2d2a929c51"} Nov 28 06:52:19 crc kubenswrapper[4955]: I1128 06:52:19.456456 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" podStartSLOduration=2.812265584 podStartE2EDuration="3.456435777s" podCreationTimestamp="2025-11-28 06:52:16 +0000 UTC" firstStartedPulling="2025-11-28 06:52:17.460441662 +0000 UTC m=+1860.049697242" lastFinishedPulling="2025-11-28 06:52:18.104611855 +0000 UTC m=+1860.693867435" observedRunningTime="2025-11-28 06:52:19.452244248 +0000 UTC m=+1862.041499848" watchObservedRunningTime="2025-11-28 06:52:19.456435777 +0000 UTC m=+1862.045691377" Nov 28 06:52:27 crc kubenswrapper[4955]: I1128 06:52:27.438459 4955 scope.go:117] "RemoveContainer" 
containerID="c26f3901b7a027d70391681e78315809639478f84c71b57747b4de3caa2be945" Nov 28 06:53:00 crc kubenswrapper[4955]: I1128 06:53:00.837120 4955 generic.go:334] "Generic (PLEG): container finished" podID="93204339-2c92-4d5d-a519-402ee3a45e79" containerID="ac7d1b63bab27d8704d600ade42fd03b6500351f5ef16ba5d698fd2d2a929c51" exitCode=0 Nov 28 06:53:00 crc kubenswrapper[4955]: I1128 06:53:00.837192 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" event={"ID":"93204339-2c92-4d5d-a519-402ee3a45e79","Type":"ContainerDied","Data":"ac7d1b63bab27d8704d600ade42fd03b6500351f5ef16ba5d698fd2d2a929c51"} Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.353771 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.354809 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-libvirt-combined-ca-bundle\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.354868 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.354930 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-neutron-metadata-combined-ca-bundle\") pod 
\"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.354956 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ssh-key\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.355004 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-telemetry-combined-ca-bundle\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.355037 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.355077 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-bootstrap-combined-ca-bundle\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.355102 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ovn-combined-ca-bundle\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc 
kubenswrapper[4955]: I1128 06:53:02.355147 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-inventory\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.355175 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj7jt\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-kube-api-access-hj7jt\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.355197 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.362973 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.362994 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-kube-api-access-hj7jt" (OuterVolumeSpecName: "kube-api-access-hj7jt") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "kube-api-access-hj7jt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.363419 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.364136 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.364492 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.364695 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.366478 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.370897 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.380394 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.402688 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.415656 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-inventory" (OuterVolumeSpecName: "inventory") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.456720 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-ovn-default-certs-0\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.457140 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-repo-setup-combined-ca-bundle\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: \"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.457205 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-nova-combined-ca-bundle\") pod \"93204339-2c92-4d5d-a519-402ee3a45e79\" (UID: 
\"93204339-2c92-4d5d-a519-402ee3a45e79\") " Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458082 4955 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458117 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458154 4955 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458169 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458183 4955 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458195 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458228 4955 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458244 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458256 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458267 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj7jt\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-kube-api-access-hj7jt\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.458279 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.460962 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.461351 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.462816 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "93204339-2c92-4d5d-a519-402ee3a45e79" (UID: "93204339-2c92-4d5d-a519-402ee3a45e79"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.558890 4955 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.558923 4955 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93204339-2c92-4d5d-a519-402ee3a45e79-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.558933 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/93204339-2c92-4d5d-a519-402ee3a45e79-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.858265 4955 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" event={"ID":"93204339-2c92-4d5d-a519-402ee3a45e79","Type":"ContainerDied","Data":"112ece0ed353d9cc09c17526e1b0bce9a5aab9343a3790de834843086d667287"} Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.858726 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="112ece0ed353d9cc09c17526e1b0bce9a5aab9343a3790de834843086d667287" Nov 28 06:53:02 crc kubenswrapper[4955]: I1128 06:53:02.858335 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5q92s" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.060135 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w"] Nov 28 06:53:03 crc kubenswrapper[4955]: E1128 06:53:03.060586 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93204339-2c92-4d5d-a519-402ee3a45e79" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.060607 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="93204339-2c92-4d5d-a519-402ee3a45e79" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.060813 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="93204339-2c92-4d5d-a519-402ee3a45e79" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.061574 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.064096 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.065053 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.065222 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.065642 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.066280 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.070763 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.070846 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9e473921-1378-4318-89ef-7f2f39c41aed-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.070898 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.071003 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.071102 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dncq8\" (UniqueName: \"kubernetes.io/projected/9e473921-1378-4318-89ef-7f2f39c41aed-kube-api-access-dncq8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.082726 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w"] Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.172388 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.172449 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9e473921-1378-4318-89ef-7f2f39c41aed-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.172487 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.172543 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.172597 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dncq8\" (UniqueName: \"kubernetes.io/projected/9e473921-1378-4318-89ef-7f2f39c41aed-kube-api-access-dncq8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.173933 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9e473921-1378-4318-89ef-7f2f39c41aed-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.176648 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.177054 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.177490 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.190356 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dncq8\" (UniqueName: \"kubernetes.io/projected/9e473921-1378-4318-89ef-7f2f39c41aed-kube-api-access-dncq8\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dmn6w\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:03 crc kubenswrapper[4955]: I1128 06:53:03.432491 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:53:04 crc kubenswrapper[4955]: I1128 06:53:04.002049 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w"] Nov 28 06:53:04 crc kubenswrapper[4955]: W1128 06:53:04.005986 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e473921_1378_4318_89ef_7f2f39c41aed.slice/crio-5e967f2e0fb367a60b89ed4ba7636622270e62c48a1b52047828f700157f010a WatchSource:0}: Error finding container 5e967f2e0fb367a60b89ed4ba7636622270e62c48a1b52047828f700157f010a: Status 404 returned error can't find the container with id 5e967f2e0fb367a60b89ed4ba7636622270e62c48a1b52047828f700157f010a Nov 28 06:53:04 crc kubenswrapper[4955]: I1128 06:53:04.889496 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" event={"ID":"9e473921-1378-4318-89ef-7f2f39c41aed","Type":"ContainerStarted","Data":"2b5999e4fb34e3e6d795bab585882743b1576a2e946ecd3a53f7882b7b8b3fd9"} Nov 28 06:53:04 crc kubenswrapper[4955]: I1128 06:53:04.890218 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" event={"ID":"9e473921-1378-4318-89ef-7f2f39c41aed","Type":"ContainerStarted","Data":"5e967f2e0fb367a60b89ed4ba7636622270e62c48a1b52047828f700157f010a"} Nov 28 06:53:04 crc kubenswrapper[4955]: I1128 06:53:04.915945 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" podStartSLOduration=1.348593743 podStartE2EDuration="1.915903773s" podCreationTimestamp="2025-11-28 06:53:03 +0000 UTC" firstStartedPulling="2025-11-28 06:53:04.009875237 +0000 UTC m=+1906.599130817" lastFinishedPulling="2025-11-28 06:53:04.577185277 +0000 UTC m=+1907.166440847" observedRunningTime="2025-11-28 
06:53:04.913894707 +0000 UTC m=+1907.503150297" watchObservedRunningTime="2025-11-28 06:53:04.915903773 +0000 UTC m=+1907.505159383" Nov 28 06:53:53 crc kubenswrapper[4955]: I1128 06:53:53.392822 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:53:53 crc kubenswrapper[4955]: I1128 06:53:53.393445 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:54:12 crc kubenswrapper[4955]: I1128 06:54:12.595950 4955 generic.go:334] "Generic (PLEG): container finished" podID="9e473921-1378-4318-89ef-7f2f39c41aed" containerID="2b5999e4fb34e3e6d795bab585882743b1576a2e946ecd3a53f7882b7b8b3fd9" exitCode=0 Nov 28 06:54:12 crc kubenswrapper[4955]: I1128 06:54:12.596060 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" event={"ID":"9e473921-1378-4318-89ef-7f2f39c41aed","Type":"ContainerDied","Data":"2b5999e4fb34e3e6d795bab585882743b1576a2e946ecd3a53f7882b7b8b3fd9"} Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.000679 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.714197 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ovn-combined-ca-bundle\") pod \"9e473921-1378-4318-89ef-7f2f39c41aed\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.714368 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dncq8\" (UniqueName: \"kubernetes.io/projected/9e473921-1378-4318-89ef-7f2f39c41aed-kube-api-access-dncq8\") pod \"9e473921-1378-4318-89ef-7f2f39c41aed\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.714443 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ssh-key\") pod \"9e473921-1378-4318-89ef-7f2f39c41aed\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.727736 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9e473921-1378-4318-89ef-7f2f39c41aed-ovncontroller-config-0\") pod \"9e473921-1378-4318-89ef-7f2f39c41aed\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.728236 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-inventory\") pod \"9e473921-1378-4318-89ef-7f2f39c41aed\" (UID: \"9e473921-1378-4318-89ef-7f2f39c41aed\") " Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.742268 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/9e473921-1378-4318-89ef-7f2f39c41aed-kube-api-access-dncq8" (OuterVolumeSpecName: "kube-api-access-dncq8") pod "9e473921-1378-4318-89ef-7f2f39c41aed" (UID: "9e473921-1378-4318-89ef-7f2f39c41aed"). InnerVolumeSpecName "kube-api-access-dncq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.769382 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9e473921-1378-4318-89ef-7f2f39c41aed" (UID: "9e473921-1378-4318-89ef-7f2f39c41aed"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.772755 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" event={"ID":"9e473921-1378-4318-89ef-7f2f39c41aed","Type":"ContainerDied","Data":"5e967f2e0fb367a60b89ed4ba7636622270e62c48a1b52047828f700157f010a"} Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.772807 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e967f2e0fb367a60b89ed4ba7636622270e62c48a1b52047828f700157f010a" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.772893 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dmn6w" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.798924 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e473921-1378-4318-89ef-7f2f39c41aed-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9e473921-1378-4318-89ef-7f2f39c41aed" (UID: "9e473921-1378-4318-89ef-7f2f39c41aed"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.802410 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-inventory" (OuterVolumeSpecName: "inventory") pod "9e473921-1378-4318-89ef-7f2f39c41aed" (UID: "9e473921-1378-4318-89ef-7f2f39c41aed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.815636 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9e473921-1378-4318-89ef-7f2f39c41aed" (UID: "9e473921-1378-4318-89ef-7f2f39c41aed"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.831314 4955 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9e473921-1378-4318-89ef-7f2f39c41aed-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.831359 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.831376 4955 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.831393 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dncq8\" (UniqueName: \"kubernetes.io/projected/9e473921-1378-4318-89ef-7f2f39c41aed-kube-api-access-dncq8\") on node \"crc\" DevicePath \"\"" Nov 
28 06:54:14 crc kubenswrapper[4955]: I1128 06:54:14.831405 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e473921-1378-4318-89ef-7f2f39c41aed-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.862272 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc"] Nov 28 06:54:15 crc kubenswrapper[4955]: E1128 06:54:15.863309 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e473921-1378-4318-89ef-7f2f39c41aed" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.863335 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e473921-1378-4318-89ef-7f2f39c41aed" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.863693 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e473921-1378-4318-89ef-7f2f39c41aed" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.864778 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.866863 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.867469 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.867665 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.867954 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.868389 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.868777 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.884773 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc"] Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.955193 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.955437 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.955668 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.955817 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ghl\" (UniqueName: \"kubernetes.io/projected/17b265c1-83dd-4a5c-9e5b-92923c919d1d-kube-api-access-56ghl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.955873 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:15 crc kubenswrapper[4955]: I1128 06:54:15.956015 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.057246 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.057321 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.057425 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.057494 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.057564 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56ghl\" (UniqueName: \"kubernetes.io/projected/17b265c1-83dd-4a5c-9e5b-92923c919d1d-kube-api-access-56ghl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.057592 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.063086 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.063388 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: 
\"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.064112 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.065042 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.066908 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.077435 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56ghl\" (UniqueName: \"kubernetes.io/projected/17b265c1-83dd-4a5c-9e5b-92923c919d1d-kube-api-access-56ghl\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.196543 4955 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.756969 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc"] Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.766205 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:54:16 crc kubenswrapper[4955]: I1128 06:54:16.834294 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" event={"ID":"17b265c1-83dd-4a5c-9e5b-92923c919d1d","Type":"ContainerStarted","Data":"83156b9dafe23cb42aeb73cb486d9cce66cf6f56d37ce0751361af3588b3aebd"} Nov 28 06:54:17 crc kubenswrapper[4955]: I1128 06:54:17.843463 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" event={"ID":"17b265c1-83dd-4a5c-9e5b-92923c919d1d","Type":"ContainerStarted","Data":"9ba7ce794e6c10058c6d41a3e4a6162186735816438361de3a7dd1488c99f8a6"} Nov 28 06:54:17 crc kubenswrapper[4955]: I1128 06:54:17.864648 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" podStartSLOduration=2.308825402 podStartE2EDuration="2.864628317s" podCreationTimestamp="2025-11-28 06:54:15 +0000 UTC" firstStartedPulling="2025-11-28 06:54:16.765926923 +0000 UTC m=+1979.355182493" lastFinishedPulling="2025-11-28 06:54:17.321729828 +0000 UTC m=+1979.910985408" observedRunningTime="2025-11-28 06:54:17.857519246 +0000 UTC m=+1980.446774816" watchObservedRunningTime="2025-11-28 06:54:17.864628317 +0000 UTC m=+1980.453883887" Nov 28 06:54:23 crc kubenswrapper[4955]: I1128 06:54:23.393409 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:54:23 crc kubenswrapper[4955]: I1128 06:54:23.393966 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:54:53 crc kubenswrapper[4955]: I1128 06:54:53.393661 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:54:53 crc kubenswrapper[4955]: I1128 06:54:53.395054 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:54:53 crc kubenswrapper[4955]: I1128 06:54:53.395155 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:54:53 crc kubenswrapper[4955]: I1128 06:54:53.396551 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"697d772b58f553ce8a010e24cb26d55496ee8b360eb534706abc2c2bd347e32c"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:54:53 crc 
kubenswrapper[4955]: I1128 06:54:53.396659 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://697d772b58f553ce8a010e24cb26d55496ee8b360eb534706abc2c2bd347e32c" gracePeriod=600 Nov 28 06:54:54 crc kubenswrapper[4955]: I1128 06:54:54.214380 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="697d772b58f553ce8a010e24cb26d55496ee8b360eb534706abc2c2bd347e32c" exitCode=0 Nov 28 06:54:54 crc kubenswrapper[4955]: I1128 06:54:54.214451 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"697d772b58f553ce8a010e24cb26d55496ee8b360eb534706abc2c2bd347e32c"} Nov 28 06:54:54 crc kubenswrapper[4955]: I1128 06:54:54.215023 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38"} Nov 28 06:54:54 crc kubenswrapper[4955]: I1128 06:54:54.215049 4955 scope.go:117] "RemoveContainer" containerID="f733bc4798b7f7960c796ebbbb459920f38728482047a5c9b8052139b511f476" Nov 28 06:55:08 crc kubenswrapper[4955]: I1128 06:55:08.346826 4955 generic.go:334] "Generic (PLEG): container finished" podID="17b265c1-83dd-4a5c-9e5b-92923c919d1d" containerID="9ba7ce794e6c10058c6d41a3e4a6162186735816438361de3a7dd1488c99f8a6" exitCode=0 Nov 28 06:55:08 crc kubenswrapper[4955]: I1128 06:55:08.346871 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" 
event={"ID":"17b265c1-83dd-4a5c-9e5b-92923c919d1d","Type":"ContainerDied","Data":"9ba7ce794e6c10058c6d41a3e4a6162186735816438361de3a7dd1488c99f8a6"} Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.730298 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.863243 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-ssh-key\") pod \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.863456 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-nova-metadata-neutron-config-0\") pod \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.863539 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-inventory\") pod \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.863611 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56ghl\" (UniqueName: \"kubernetes.io/projected/17b265c1-83dd-4a5c-9e5b-92923c919d1d-kube-api-access-56ghl\") pod \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.863688 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-metadata-combined-ca-bundle\") pod \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.863719 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\" (UID: \"17b265c1-83dd-4a5c-9e5b-92923c919d1d\") " Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.871361 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b265c1-83dd-4a5c-9e5b-92923c919d1d-kube-api-access-56ghl" (OuterVolumeSpecName: "kube-api-access-56ghl") pod "17b265c1-83dd-4a5c-9e5b-92923c919d1d" (UID: "17b265c1-83dd-4a5c-9e5b-92923c919d1d"). InnerVolumeSpecName "kube-api-access-56ghl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.872021 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "17b265c1-83dd-4a5c-9e5b-92923c919d1d" (UID: "17b265c1-83dd-4a5c-9e5b-92923c919d1d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.894317 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "17b265c1-83dd-4a5c-9e5b-92923c919d1d" (UID: "17b265c1-83dd-4a5c-9e5b-92923c919d1d"). 
InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.900625 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "17b265c1-83dd-4a5c-9e5b-92923c919d1d" (UID: "17b265c1-83dd-4a5c-9e5b-92923c919d1d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.906490 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-inventory" (OuterVolumeSpecName: "inventory") pod "17b265c1-83dd-4a5c-9e5b-92923c919d1d" (UID: "17b265c1-83dd-4a5c-9e5b-92923c919d1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.912717 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17b265c1-83dd-4a5c-9e5b-92923c919d1d" (UID: "17b265c1-83dd-4a5c-9e5b-92923c919d1d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.975532 4955 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.975562 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.975574 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56ghl\" (UniqueName: \"kubernetes.io/projected/17b265c1-83dd-4a5c-9e5b-92923c919d1d-kube-api-access-56ghl\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.975584 4955 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.975593 4955 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:09 crc kubenswrapper[4955]: I1128 06:55:09.975602 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b265c1-83dd-4a5c-9e5b-92923c919d1d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.367364 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" 
event={"ID":"17b265c1-83dd-4a5c-9e5b-92923c919d1d","Type":"ContainerDied","Data":"83156b9dafe23cb42aeb73cb486d9cce66cf6f56d37ce0751361af3588b3aebd"} Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.367698 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83156b9dafe23cb42aeb73cb486d9cce66cf6f56d37ce0751361af3588b3aebd" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.367455 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.465599 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc"] Nov 28 06:55:10 crc kubenswrapper[4955]: E1128 06:55:10.466096 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b265c1-83dd-4a5c-9e5b-92923c919d1d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.466122 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b265c1-83dd-4a5c-9e5b-92923c919d1d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.466378 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b265c1-83dd-4a5c-9e5b-92923c919d1d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.467847 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.469798 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.470937 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.470998 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.472699 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.479019 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc"] Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.512168 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.607487 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-246jm\" (UniqueName: \"kubernetes.io/projected/845c1878-1788-4409-bbd8-a76a2f3eed71-kube-api-access-246jm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.607797 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.607872 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.607943 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.608019 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.709939 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-246jm\" (UniqueName: \"kubernetes.io/projected/845c1878-1788-4409-bbd8-a76a2f3eed71-kube-api-access-246jm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.710044 4955 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.710075 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.710103 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.710132 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.714581 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 
06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.714772 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.714993 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.715799 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.743674 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-246jm\" (UniqueName: \"kubernetes.io/projected/845c1878-1788-4409-bbd8-a76a2f3eed71-kube-api-access-246jm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:10 crc kubenswrapper[4955]: I1128 06:55:10.828643 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:55:11 crc kubenswrapper[4955]: I1128 06:55:11.400197 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc"] Nov 28 06:55:12 crc kubenswrapper[4955]: I1128 06:55:12.390991 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" event={"ID":"845c1878-1788-4409-bbd8-a76a2f3eed71","Type":"ContainerStarted","Data":"7815f0b008a4252cb4e6c672eb4bb88e43cbf74ec153d32f4cd407f58088cbad"} Nov 28 06:55:12 crc kubenswrapper[4955]: I1128 06:55:12.392284 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" event={"ID":"845c1878-1788-4409-bbd8-a76a2f3eed71","Type":"ContainerStarted","Data":"fb27768bbed29b5848e617644bad3a7de743106755724c1ea892b4f764aa2a3d"} Nov 28 06:55:12 crc kubenswrapper[4955]: I1128 06:55:12.411786 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" podStartSLOduration=1.974172828 podStartE2EDuration="2.411767757s" podCreationTimestamp="2025-11-28 06:55:10 +0000 UTC" firstStartedPulling="2025-11-28 06:55:11.407821727 +0000 UTC m=+2033.997077307" lastFinishedPulling="2025-11-28 06:55:11.845416666 +0000 UTC m=+2034.434672236" observedRunningTime="2025-11-28 06:55:12.410215683 +0000 UTC m=+2034.999471273" watchObservedRunningTime="2025-11-28 06:55:12.411767757 +0000 UTC m=+2035.001023327" Nov 28 06:56:53 crc kubenswrapper[4955]: I1128 06:56:53.392962 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:56:53 crc kubenswrapper[4955]: I1128 
06:56:53.393522 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:57:23 crc kubenswrapper[4955]: I1128 06:57:23.394022 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:57:23 crc kubenswrapper[4955]: I1128 06:57:23.395330 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.113229 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-75hg4"] Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.115609 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.132316 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75hg4"] Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.185307 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-catalog-content\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.185385 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9v9\" (UniqueName: \"kubernetes.io/projected/ed74a165-51e7-472e-9835-215123fc3ea4-kube-api-access-nv9v9\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.185583 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-utilities\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.287037 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9v9\" (UniqueName: \"kubernetes.io/projected/ed74a165-51e7-472e-9835-215123fc3ea4-kube-api-access-nv9v9\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.287169 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-utilities\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.287236 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-catalog-content\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.287670 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-catalog-content\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.287787 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-utilities\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.316405 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9v9\" (UniqueName: \"kubernetes.io/projected/ed74a165-51e7-472e-9835-215123fc3ea4-kube-api-access-nv9v9\") pod \"community-operators-75hg4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.437524 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:29 crc kubenswrapper[4955]: I1128 06:57:29.990202 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75hg4"] Nov 28 06:57:30 crc kubenswrapper[4955]: W1128 06:57:30.003826 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded74a165_51e7_472e_9835_215123fc3ea4.slice/crio-29e34c5341b78d6b267aa449f907652e24f87bf6320f7ac38fd08e1ae1031b6d WatchSource:0}: Error finding container 29e34c5341b78d6b267aa449f907652e24f87bf6320f7ac38fd08e1ae1031b6d: Status 404 returned error can't find the container with id 29e34c5341b78d6b267aa449f907652e24f87bf6320f7ac38fd08e1ae1031b6d Nov 28 06:57:30 crc kubenswrapper[4955]: I1128 06:57:30.814474 4955 generic.go:334] "Generic (PLEG): container finished" podID="ed74a165-51e7-472e-9835-215123fc3ea4" containerID="008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297" exitCode=0 Nov 28 06:57:30 crc kubenswrapper[4955]: I1128 06:57:30.814551 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75hg4" event={"ID":"ed74a165-51e7-472e-9835-215123fc3ea4","Type":"ContainerDied","Data":"008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297"} Nov 28 06:57:30 crc kubenswrapper[4955]: I1128 06:57:30.814581 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75hg4" event={"ID":"ed74a165-51e7-472e-9835-215123fc3ea4","Type":"ContainerStarted","Data":"29e34c5341b78d6b267aa449f907652e24f87bf6320f7ac38fd08e1ae1031b6d"} Nov 28 06:57:32 crc kubenswrapper[4955]: I1128 06:57:32.833130 4955 generic.go:334] "Generic (PLEG): container finished" podID="ed74a165-51e7-472e-9835-215123fc3ea4" containerID="11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd" exitCode=0 Nov 28 06:57:32 crc kubenswrapper[4955]: I1128 
06:57:32.833282 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75hg4" event={"ID":"ed74a165-51e7-472e-9835-215123fc3ea4","Type":"ContainerDied","Data":"11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd"} Nov 28 06:57:34 crc kubenswrapper[4955]: I1128 06:57:34.862412 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75hg4" event={"ID":"ed74a165-51e7-472e-9835-215123fc3ea4","Type":"ContainerStarted","Data":"8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0"} Nov 28 06:57:34 crc kubenswrapper[4955]: I1128 06:57:34.894047 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-75hg4" podStartSLOduration=3.164127173 podStartE2EDuration="5.894024515s" podCreationTimestamp="2025-11-28 06:57:29 +0000 UTC" firstStartedPulling="2025-11-28 06:57:30.817845919 +0000 UTC m=+2173.407101489" lastFinishedPulling="2025-11-28 06:57:33.547743221 +0000 UTC m=+2176.136998831" observedRunningTime="2025-11-28 06:57:34.885377011 +0000 UTC m=+2177.474632681" watchObservedRunningTime="2025-11-28 06:57:34.894024515 +0000 UTC m=+2177.483280085" Nov 28 06:57:39 crc kubenswrapper[4955]: I1128 06:57:39.438757 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:39 crc kubenswrapper[4955]: I1128 06:57:39.439336 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:39 crc kubenswrapper[4955]: I1128 06:57:39.489772 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:39 crc kubenswrapper[4955]: I1128 06:57:39.974277 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-75hg4" Nov 
28 06:57:40 crc kubenswrapper[4955]: I1128 06:57:40.034805 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-75hg4"] Nov 28 06:57:41 crc kubenswrapper[4955]: I1128 06:57:41.936283 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-75hg4" podUID="ed74a165-51e7-472e-9835-215123fc3ea4" containerName="registry-server" containerID="cri-o://8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0" gracePeriod=2 Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.472216 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.659661 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-utilities\") pod \"ed74a165-51e7-472e-9835-215123fc3ea4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.659998 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-catalog-content\") pod \"ed74a165-51e7-472e-9835-215123fc3ea4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.660054 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv9v9\" (UniqueName: \"kubernetes.io/projected/ed74a165-51e7-472e-9835-215123fc3ea4-kube-api-access-nv9v9\") pod \"ed74a165-51e7-472e-9835-215123fc3ea4\" (UID: \"ed74a165-51e7-472e-9835-215123fc3ea4\") " Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.660536 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-utilities" (OuterVolumeSpecName: "utilities") pod "ed74a165-51e7-472e-9835-215123fc3ea4" (UID: "ed74a165-51e7-472e-9835-215123fc3ea4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.660874 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.671752 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed74a165-51e7-472e-9835-215123fc3ea4-kube-api-access-nv9v9" (OuterVolumeSpecName: "kube-api-access-nv9v9") pod "ed74a165-51e7-472e-9835-215123fc3ea4" (UID: "ed74a165-51e7-472e-9835-215123fc3ea4"). InnerVolumeSpecName "kube-api-access-nv9v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.713358 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed74a165-51e7-472e-9835-215123fc3ea4" (UID: "ed74a165-51e7-472e-9835-215123fc3ea4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.762238 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed74a165-51e7-472e-9835-215123fc3ea4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.762276 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv9v9\" (UniqueName: \"kubernetes.io/projected/ed74a165-51e7-472e-9835-215123fc3ea4-kube-api-access-nv9v9\") on node \"crc\" DevicePath \"\"" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.945699 4955 generic.go:334] "Generic (PLEG): container finished" podID="ed74a165-51e7-472e-9835-215123fc3ea4" containerID="8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0" exitCode=0 Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.945745 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75hg4" event={"ID":"ed74a165-51e7-472e-9835-215123fc3ea4","Type":"ContainerDied","Data":"8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0"} Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.945774 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75hg4" event={"ID":"ed74a165-51e7-472e-9835-215123fc3ea4","Type":"ContainerDied","Data":"29e34c5341b78d6b267aa449f907652e24f87bf6320f7ac38fd08e1ae1031b6d"} Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.945790 4955 scope.go:117] "RemoveContainer" containerID="8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.945913 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75hg4" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.973072 4955 scope.go:117] "RemoveContainer" containerID="11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd" Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.986855 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-75hg4"] Nov 28 06:57:42 crc kubenswrapper[4955]: I1128 06:57:42.996698 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-75hg4"] Nov 28 06:57:43 crc kubenswrapper[4955]: I1128 06:57:43.019993 4955 scope.go:117] "RemoveContainer" containerID="008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297" Nov 28 06:57:43 crc kubenswrapper[4955]: I1128 06:57:43.038911 4955 scope.go:117] "RemoveContainer" containerID="8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0" Nov 28 06:57:43 crc kubenswrapper[4955]: E1128 06:57:43.039345 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0\": container with ID starting with 8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0 not found: ID does not exist" containerID="8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0" Nov 28 06:57:43 crc kubenswrapper[4955]: I1128 06:57:43.039394 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0"} err="failed to get container status \"8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0\": rpc error: code = NotFound desc = could not find container \"8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0\": container with ID starting with 8f1047b6b8caedff2e20b1101ada48c985ad52e8fac2917673a8490bd8a555c0 not 
found: ID does not exist" Nov 28 06:57:43 crc kubenswrapper[4955]: I1128 06:57:43.039422 4955 scope.go:117] "RemoveContainer" containerID="11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd" Nov 28 06:57:43 crc kubenswrapper[4955]: E1128 06:57:43.040157 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd\": container with ID starting with 11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd not found: ID does not exist" containerID="11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd" Nov 28 06:57:43 crc kubenswrapper[4955]: I1128 06:57:43.040199 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd"} err="failed to get container status \"11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd\": rpc error: code = NotFound desc = could not find container \"11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd\": container with ID starting with 11e998b791f2453e858d49278f134ee9ad211266c8e93b904b1db9c8ec3dadcd not found: ID does not exist" Nov 28 06:57:43 crc kubenswrapper[4955]: I1128 06:57:43.040228 4955 scope.go:117] "RemoveContainer" containerID="008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297" Nov 28 06:57:43 crc kubenswrapper[4955]: E1128 06:57:43.040557 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297\": container with ID starting with 008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297 not found: ID does not exist" containerID="008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297" Nov 28 06:57:43 crc kubenswrapper[4955]: I1128 06:57:43.040592 4955 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297"} err="failed to get container status \"008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297\": rpc error: code = NotFound desc = could not find container \"008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297\": container with ID starting with 008a8de8bdf48f595ae81799ca48827b27beaec818e83d19c15ab681cf350297 not found: ID does not exist" Nov 28 06:57:43 crc kubenswrapper[4955]: I1128 06:57:43.716870 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed74a165-51e7-472e-9835-215123fc3ea4" path="/var/lib/kubelet/pods/ed74a165-51e7-472e-9835-215123fc3ea4/volumes" Nov 28 06:57:53 crc kubenswrapper[4955]: I1128 06:57:53.393328 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 06:57:53 crc kubenswrapper[4955]: I1128 06:57:53.394236 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 06:57:53 crc kubenswrapper[4955]: I1128 06:57:53.394324 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 06:57:53 crc kubenswrapper[4955]: I1128 06:57:53.395295 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 06:57:53 crc kubenswrapper[4955]: I1128 06:57:53.395394 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" gracePeriod=600 Nov 28 06:57:53 crc kubenswrapper[4955]: E1128 06:57:53.698673 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:57:54 crc kubenswrapper[4955]: I1128 06:57:54.078341 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" exitCode=0 Nov 28 06:57:54 crc kubenswrapper[4955]: I1128 06:57:54.078410 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38"} Nov 28 06:57:54 crc kubenswrapper[4955]: I1128 06:57:54.078486 4955 scope.go:117] "RemoveContainer" containerID="697d772b58f553ce8a010e24cb26d55496ee8b360eb534706abc2c2bd347e32c" Nov 28 06:57:54 crc kubenswrapper[4955]: I1128 06:57:54.079496 4955 
scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:57:54 crc kubenswrapper[4955]: E1128 06:57:54.080301 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:58:09 crc kubenswrapper[4955]: I1128 06:58:09.704740 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:58:09 crc kubenswrapper[4955]: E1128 06:58:09.705836 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:58:20 crc kubenswrapper[4955]: I1128 06:58:20.704208 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:58:20 crc kubenswrapper[4955]: E1128 06:58:20.704914 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:58:35 crc kubenswrapper[4955]: I1128 
06:58:35.704833 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:58:35 crc kubenswrapper[4955]: E1128 06:58:35.705596 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:58:49 crc kubenswrapper[4955]: I1128 06:58:49.705388 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:58:49 crc kubenswrapper[4955]: E1128 06:58:49.706235 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.319447 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h7tc6"] Nov 28 06:58:52 crc kubenswrapper[4955]: E1128 06:58:52.322050 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed74a165-51e7-472e-9835-215123fc3ea4" containerName="registry-server" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.322086 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed74a165-51e7-472e-9835-215123fc3ea4" containerName="registry-server" Nov 28 06:58:52 crc kubenswrapper[4955]: E1128 06:58:52.322116 4955 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ed74a165-51e7-472e-9835-215123fc3ea4" containerName="extract-content" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.322127 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed74a165-51e7-472e-9835-215123fc3ea4" containerName="extract-content" Nov 28 06:58:52 crc kubenswrapper[4955]: E1128 06:58:52.322161 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed74a165-51e7-472e-9835-215123fc3ea4" containerName="extract-utilities" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.322172 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed74a165-51e7-472e-9835-215123fc3ea4" containerName="extract-utilities" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.322462 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed74a165-51e7-472e-9835-215123fc3ea4" containerName="registry-server" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.324416 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.342039 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h7tc6"] Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.398403 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-utilities\") pod \"redhat-operators-h7tc6\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.398450 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wp5k\" (UniqueName: \"kubernetes.io/projected/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-kube-api-access-2wp5k\") pod \"redhat-operators-h7tc6\" (UID: 
\"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.398583 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-catalog-content\") pod \"redhat-operators-h7tc6\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.500595 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-catalog-content\") pod \"redhat-operators-h7tc6\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.500691 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-utilities\") pod \"redhat-operators-h7tc6\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.500720 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wp5k\" (UniqueName: \"kubernetes.io/projected/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-kube-api-access-2wp5k\") pod \"redhat-operators-h7tc6\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.502751 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-catalog-content\") pod \"redhat-operators-h7tc6\" (UID: 
\"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.503263 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-utilities\") pod \"redhat-operators-h7tc6\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.524806 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wp5k\" (UniqueName: \"kubernetes.io/projected/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-kube-api-access-2wp5k\") pod \"redhat-operators-h7tc6\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:52 crc kubenswrapper[4955]: I1128 06:58:52.688916 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:58:53 crc kubenswrapper[4955]: I1128 06:58:53.175961 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h7tc6"] Nov 28 06:58:53 crc kubenswrapper[4955]: I1128 06:58:53.792822 4955 generic.go:334] "Generic (PLEG): container finished" podID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerID="a854c992e65ba62abfe5f9b3ed85b1b3cdc18187cc777ae93958e027aa745d5b" exitCode=0 Nov 28 06:58:53 crc kubenswrapper[4955]: I1128 06:58:53.793045 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7tc6" event={"ID":"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0","Type":"ContainerDied","Data":"a854c992e65ba62abfe5f9b3ed85b1b3cdc18187cc777ae93958e027aa745d5b"} Nov 28 06:58:53 crc kubenswrapper[4955]: I1128 06:58:53.793119 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7tc6" 
event={"ID":"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0","Type":"ContainerStarted","Data":"cdb0688744a71f6c9c4e315f13a7fb8b97f6d61e85f7541cee52234b851c8752"} Nov 28 06:58:54 crc kubenswrapper[4955]: I1128 06:58:54.810752 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7tc6" event={"ID":"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0","Type":"ContainerStarted","Data":"de43545f4a82510a6cef74d5a58a1a765b731b99f160ece3bea30a260c5ade43"} Nov 28 06:58:55 crc kubenswrapper[4955]: I1128 06:58:55.825483 4955 generic.go:334] "Generic (PLEG): container finished" podID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerID="de43545f4a82510a6cef74d5a58a1a765b731b99f160ece3bea30a260c5ade43" exitCode=0 Nov 28 06:58:55 crc kubenswrapper[4955]: I1128 06:58:55.825569 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7tc6" event={"ID":"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0","Type":"ContainerDied","Data":"de43545f4a82510a6cef74d5a58a1a765b731b99f160ece3bea30a260c5ade43"} Nov 28 06:58:56 crc kubenswrapper[4955]: I1128 06:58:56.843299 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7tc6" event={"ID":"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0","Type":"ContainerStarted","Data":"d435a41a298fca96b8a306017e2557ecc3d72e80717fdc4646ff4879df33ed14"} Nov 28 06:58:56 crc kubenswrapper[4955]: I1128 06:58:56.876632 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h7tc6" podStartSLOduration=2.226926659 podStartE2EDuration="4.87661125s" podCreationTimestamp="2025-11-28 06:58:52 +0000 UTC" firstStartedPulling="2025-11-28 06:58:53.795903694 +0000 UTC m=+2256.385159284" lastFinishedPulling="2025-11-28 06:58:56.445588285 +0000 UTC m=+2259.034843875" observedRunningTime="2025-11-28 06:58:56.864173459 +0000 UTC m=+2259.453429079" watchObservedRunningTime="2025-11-28 06:58:56.87661125 +0000 UTC m=+2259.465866830" 
Nov 28 06:59:00 crc kubenswrapper[4955]: I1128 06:59:00.704826 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:59:00 crc kubenswrapper[4955]: E1128 06:59:00.705908 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.686862 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dclfc"] Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.689250 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.701796 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dclfc"] Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.803408 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vns4s\" (UniqueName: \"kubernetes.io/projected/d81329a8-aa04-4ada-8f54-44db4feba2d9-kube-api-access-vns4s\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.803538 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-catalog-content\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " 
pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.803584 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-utilities\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.906187 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-catalog-content\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.906358 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-catalog-content\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.906638 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-utilities\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.906995 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-utilities\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " 
pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.907334 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vns4s\" (UniqueName: \"kubernetes.io/projected/d81329a8-aa04-4ada-8f54-44db4feba2d9-kube-api-access-vns4s\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:01 crc kubenswrapper[4955]: I1128 06:59:01.940163 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vns4s\" (UniqueName: \"kubernetes.io/projected/d81329a8-aa04-4ada-8f54-44db4feba2d9-kube-api-access-vns4s\") pod \"certified-operators-dclfc\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:02 crc kubenswrapper[4955]: I1128 06:59:02.071474 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:02 crc kubenswrapper[4955]: I1128 06:59:02.580069 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dclfc"] Nov 28 06:59:02 crc kubenswrapper[4955]: I1128 06:59:02.689437 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:59:02 crc kubenswrapper[4955]: I1128 06:59:02.689529 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:59:02 crc kubenswrapper[4955]: I1128 06:59:02.905143 4955 generic.go:334] "Generic (PLEG): container finished" podID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerID="8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7" exitCode=0 Nov 28 06:59:02 crc kubenswrapper[4955]: I1128 06:59:02.905196 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-dclfc" event={"ID":"d81329a8-aa04-4ada-8f54-44db4feba2d9","Type":"ContainerDied","Data":"8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7"} Nov 28 06:59:02 crc kubenswrapper[4955]: I1128 06:59:02.905431 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dclfc" event={"ID":"d81329a8-aa04-4ada-8f54-44db4feba2d9","Type":"ContainerStarted","Data":"3210c0c11c17f50ae190b6425021e6387aa7a33dc26c2793be8a5cf209650104"} Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.498873 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvv7h"] Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.502036 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.521072 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvv7h"] Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.541915 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhrrc\" (UniqueName: \"kubernetes.io/projected/f1490061-8d25-458d-825b-2006937f9b62-kube-api-access-bhrrc\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.542001 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1490061-8d25-458d-825b-2006937f9b62-utilities\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.542029 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1490061-8d25-458d-825b-2006937f9b62-catalog-content\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.643999 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhrrc\" (UniqueName: \"kubernetes.io/projected/f1490061-8d25-458d-825b-2006937f9b62-kube-api-access-bhrrc\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.644330 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1490061-8d25-458d-825b-2006937f9b62-utilities\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.644482 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1490061-8d25-458d-825b-2006937f9b62-catalog-content\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.645045 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1490061-8d25-458d-825b-2006937f9b62-catalog-content\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.645072 4955 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1490061-8d25-458d-825b-2006937f9b62-utilities\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.668084 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhrrc\" (UniqueName: \"kubernetes.io/projected/f1490061-8d25-458d-825b-2006937f9b62-kube-api-access-bhrrc\") pod \"redhat-marketplace-mvv7h\" (UID: \"f1490061-8d25-458d-825b-2006937f9b62\") " pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.741535 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h7tc6" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="registry-server" probeResult="failure" output=< Nov 28 06:59:03 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Nov 28 06:59:03 crc kubenswrapper[4955]: > Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.839641 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:03 crc kubenswrapper[4955]: I1128 06:59:03.924933 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dclfc" event={"ID":"d81329a8-aa04-4ada-8f54-44db4feba2d9","Type":"ContainerStarted","Data":"d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3"} Nov 28 06:59:04 crc kubenswrapper[4955]: I1128 06:59:04.336658 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvv7h"] Nov 28 06:59:04 crc kubenswrapper[4955]: I1128 06:59:04.942967 4955 generic.go:334] "Generic (PLEG): container finished" podID="f1490061-8d25-458d-825b-2006937f9b62" containerID="8642dbdb3464aef33fb567453e335fbdde889d626d532bb0d0de6bfd7619f899" exitCode=0 Nov 28 06:59:04 crc kubenswrapper[4955]: I1128 06:59:04.943030 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvv7h" event={"ID":"f1490061-8d25-458d-825b-2006937f9b62","Type":"ContainerDied","Data":"8642dbdb3464aef33fb567453e335fbdde889d626d532bb0d0de6bfd7619f899"} Nov 28 06:59:04 crc kubenswrapper[4955]: I1128 06:59:04.943410 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvv7h" event={"ID":"f1490061-8d25-458d-825b-2006937f9b62","Type":"ContainerStarted","Data":"61001c39895900a7db8f5dc6c1ddf0e0f0ae5d660eb8856313e32350ff2a984e"} Nov 28 06:59:04 crc kubenswrapper[4955]: I1128 06:59:04.948765 4955 generic.go:334] "Generic (PLEG): container finished" podID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerID="d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3" exitCode=0 Nov 28 06:59:04 crc kubenswrapper[4955]: I1128 06:59:04.948816 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dclfc" 
event={"ID":"d81329a8-aa04-4ada-8f54-44db4feba2d9","Type":"ContainerDied","Data":"d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3"} Nov 28 06:59:05 crc kubenswrapper[4955]: I1128 06:59:05.960261 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dclfc" event={"ID":"d81329a8-aa04-4ada-8f54-44db4feba2d9","Type":"ContainerStarted","Data":"1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe"} Nov 28 06:59:05 crc kubenswrapper[4955]: I1128 06:59:05.976885 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dclfc" podStartSLOduration=2.320221148 podStartE2EDuration="4.976863771s" podCreationTimestamp="2025-11-28 06:59:01 +0000 UTC" firstStartedPulling="2025-11-28 06:59:02.907250502 +0000 UTC m=+2265.496506072" lastFinishedPulling="2025-11-28 06:59:05.563893085 +0000 UTC m=+2268.153148695" observedRunningTime="2025-11-28 06:59:05.976164021 +0000 UTC m=+2268.565419611" watchObservedRunningTime="2025-11-28 06:59:05.976863771 +0000 UTC m=+2268.566119341" Nov 28 06:59:08 crc kubenswrapper[4955]: I1128 06:59:08.992643 4955 generic.go:334] "Generic (PLEG): container finished" podID="f1490061-8d25-458d-825b-2006937f9b62" containerID="7c60b734d3a9994f29b980807e0f6007fd50bd222c2109a68e443ad35c5092bc" exitCode=0 Nov 28 06:59:08 crc kubenswrapper[4955]: I1128 06:59:08.992714 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvv7h" event={"ID":"f1490061-8d25-458d-825b-2006937f9b62","Type":"ContainerDied","Data":"7c60b734d3a9994f29b980807e0f6007fd50bd222c2109a68e443ad35c5092bc"} Nov 28 06:59:10 crc kubenswrapper[4955]: I1128 06:59:10.005545 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvv7h" 
event={"ID":"f1490061-8d25-458d-825b-2006937f9b62","Type":"ContainerStarted","Data":"38701257b9409f0a7034ac01b439e996d0ef7f02edbc880f8070e5e2c1fab925"} Nov 28 06:59:10 crc kubenswrapper[4955]: I1128 06:59:10.029864 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvv7h" podStartSLOduration=2.502553243 podStartE2EDuration="7.029845885s" podCreationTimestamp="2025-11-28 06:59:03 +0000 UTC" firstStartedPulling="2025-11-28 06:59:04.947172018 +0000 UTC m=+2267.536427608" lastFinishedPulling="2025-11-28 06:59:09.47446464 +0000 UTC m=+2272.063720250" observedRunningTime="2025-11-28 06:59:10.026630983 +0000 UTC m=+2272.615886553" watchObservedRunningTime="2025-11-28 06:59:10.029845885 +0000 UTC m=+2272.619101465" Nov 28 06:59:11 crc kubenswrapper[4955]: I1128 06:59:11.704825 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:59:11 crc kubenswrapper[4955]: E1128 06:59:11.705471 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:59:12 crc kubenswrapper[4955]: I1128 06:59:12.071957 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:12 crc kubenswrapper[4955]: I1128 06:59:12.072045 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:12 crc kubenswrapper[4955]: I1128 06:59:12.143981 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:12 crc kubenswrapper[4955]: I1128 06:59:12.752705 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:59:12 crc kubenswrapper[4955]: I1128 06:59:12.812291 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:59:13 crc kubenswrapper[4955]: I1128 06:59:13.103866 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:13 crc kubenswrapper[4955]: I1128 06:59:13.593899 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h7tc6"] Nov 28 06:59:13 crc kubenswrapper[4955]: I1128 06:59:13.840063 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:13 crc kubenswrapper[4955]: I1128 06:59:13.840113 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:13 crc kubenswrapper[4955]: I1128 06:59:13.907093 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:14 crc kubenswrapper[4955]: I1128 06:59:14.044803 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h7tc6" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="registry-server" containerID="cri-o://d435a41a298fca96b8a306017e2557ecc3d72e80717fdc4646ff4879df33ed14" gracePeriod=2 Nov 28 06:59:14 crc kubenswrapper[4955]: I1128 06:59:14.115241 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvv7h" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.056821 4955 generic.go:334] 
"Generic (PLEG): container finished" podID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerID="d435a41a298fca96b8a306017e2557ecc3d72e80717fdc4646ff4879df33ed14" exitCode=0 Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.056878 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7tc6" event={"ID":"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0","Type":"ContainerDied","Data":"d435a41a298fca96b8a306017e2557ecc3d72e80717fdc4646ff4879df33ed14"} Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.057907 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h7tc6" event={"ID":"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0","Type":"ContainerDied","Data":"cdb0688744a71f6c9c4e315f13a7fb8b97f6d61e85f7541cee52234b851c8752"} Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.057918 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdb0688744a71f6c9c4e315f13a7fb8b97f6d61e85f7541cee52234b851c8752" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.075387 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.206137 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wp5k\" (UniqueName: \"kubernetes.io/projected/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-kube-api-access-2wp5k\") pod \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.206332 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-catalog-content\") pod \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.206583 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-utilities\") pod \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\" (UID: \"58ed68ac-28d1-4de9-a31e-4f2cac2a67d0\") " Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.208989 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-utilities" (OuterVolumeSpecName: "utilities") pod "58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" (UID: "58ed68ac-28d1-4de9-a31e-4f2cac2a67d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.218542 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-kube-api-access-2wp5k" (OuterVolumeSpecName: "kube-api-access-2wp5k") pod "58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" (UID: "58ed68ac-28d1-4de9-a31e-4f2cac2a67d0"). InnerVolumeSpecName "kube-api-access-2wp5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.309462 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wp5k\" (UniqueName: \"kubernetes.io/projected/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-kube-api-access-2wp5k\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.309535 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.356242 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" (UID: "58ed68ac-28d1-4de9-a31e-4f2cac2a67d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.398749 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dclfc"] Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.399117 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dclfc" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerName="registry-server" containerID="cri-o://1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe" gracePeriod=2 Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.411178 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:15 crc kubenswrapper[4955]: I1128 06:59:15.837050 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:15 crc kubenswrapper[4955]: E1128 06:59:15.934461 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58ed68ac_28d1_4de9_a31e_4f2cac2a67d0.slice/crio-cdb0688744a71f6c9c4e315f13a7fb8b97f6d61e85f7541cee52234b851c8752\": RecentStats: unable to find data in memory cache]" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.030375 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vns4s\" (UniqueName: \"kubernetes.io/projected/d81329a8-aa04-4ada-8f54-44db4feba2d9-kube-api-access-vns4s\") pod \"d81329a8-aa04-4ada-8f54-44db4feba2d9\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.030839 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-utilities\") pod \"d81329a8-aa04-4ada-8f54-44db4feba2d9\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.030880 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-catalog-content\") pod \"d81329a8-aa04-4ada-8f54-44db4feba2d9\" (UID: \"d81329a8-aa04-4ada-8f54-44db4feba2d9\") " Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.031859 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-utilities" (OuterVolumeSpecName: "utilities") pod "d81329a8-aa04-4ada-8f54-44db4feba2d9" (UID: "d81329a8-aa04-4ada-8f54-44db4feba2d9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.039654 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d81329a8-aa04-4ada-8f54-44db4feba2d9-kube-api-access-vns4s" (OuterVolumeSpecName: "kube-api-access-vns4s") pod "d81329a8-aa04-4ada-8f54-44db4feba2d9" (UID: "d81329a8-aa04-4ada-8f54-44db4feba2d9"). InnerVolumeSpecName "kube-api-access-vns4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.069239 4955 generic.go:334] "Generic (PLEG): container finished" podID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerID="1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe" exitCode=0 Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.069305 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dclfc" event={"ID":"d81329a8-aa04-4ada-8f54-44db4feba2d9","Type":"ContainerDied","Data":"1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe"} Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.069369 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h7tc6" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.069379 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dclfc" event={"ID":"d81329a8-aa04-4ada-8f54-44db4feba2d9","Type":"ContainerDied","Data":"3210c0c11c17f50ae190b6425021e6387aa7a33dc26c2793be8a5cf209650104"} Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.069405 4955 scope.go:117] "RemoveContainer" containerID="1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.069715 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dclfc" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.078232 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d81329a8-aa04-4ada-8f54-44db4feba2d9" (UID: "d81329a8-aa04-4ada-8f54-44db4feba2d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.100587 4955 scope.go:117] "RemoveContainer" containerID="d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.100948 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h7tc6"] Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.107785 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h7tc6"] Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.120471 4955 scope.go:117] "RemoveContainer" containerID="8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.132824 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.132863 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d81329a8-aa04-4ada-8f54-44db4feba2d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.132878 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vns4s\" (UniqueName: \"kubernetes.io/projected/d81329a8-aa04-4ada-8f54-44db4feba2d9-kube-api-access-vns4s\") 
on node \"crc\" DevicePath \"\"" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.141384 4955 scope.go:117] "RemoveContainer" containerID="1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe" Nov 28 06:59:16 crc kubenswrapper[4955]: E1128 06:59:16.141874 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe\": container with ID starting with 1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe not found: ID does not exist" containerID="1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.141917 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe"} err="failed to get container status \"1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe\": rpc error: code = NotFound desc = could not find container \"1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe\": container with ID starting with 1055d87a6e7a7cd9a70677d7a7ccf969a26b120c88416a7567894e2e177be7fe not found: ID does not exist" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.141938 4955 scope.go:117] "RemoveContainer" containerID="d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3" Nov 28 06:59:16 crc kubenswrapper[4955]: E1128 06:59:16.142239 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3\": container with ID starting with d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3 not found: ID does not exist" containerID="d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.142266 4955 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3"} err="failed to get container status \"d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3\": rpc error: code = NotFound desc = could not find container \"d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3\": container with ID starting with d27fdace85ce6923d1e10cea0b680280989eef8c52543c7caa36be8cb30e72c3 not found: ID does not exist" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.142278 4955 scope.go:117] "RemoveContainer" containerID="8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7" Nov 28 06:59:16 crc kubenswrapper[4955]: E1128 06:59:16.142589 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7\": container with ID starting with 8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7 not found: ID does not exist" containerID="8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.142635 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7"} err="failed to get container status \"8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7\": rpc error: code = NotFound desc = could not find container \"8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7\": container with ID starting with 8b52e4d49bd33e91856b6b26c0118473e54eda7e591516b7fa0dd68948e7e1c7 not found: ID does not exist" Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.434679 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dclfc"] Nov 28 06:59:16 crc kubenswrapper[4955]: I1128 06:59:16.444363 
4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dclfc"] Nov 28 06:59:17 crc kubenswrapper[4955]: I1128 06:59:17.416070 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvv7h"] Nov 28 06:59:17 crc kubenswrapper[4955]: I1128 06:59:17.719525 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" path="/var/lib/kubelet/pods/58ed68ac-28d1-4de9-a31e-4f2cac2a67d0/volumes" Nov 28 06:59:17 crc kubenswrapper[4955]: I1128 06:59:17.720433 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" path="/var/lib/kubelet/pods/d81329a8-aa04-4ada-8f54-44db4feba2d9/volumes" Nov 28 06:59:17 crc kubenswrapper[4955]: I1128 06:59:17.791577 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbjwb"] Nov 28 06:59:17 crc kubenswrapper[4955]: I1128 06:59:17.791848 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kbjwb" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerName="registry-server" containerID="cri-o://8a588daa6fb2aec840fdc404fd0ee7b7e2058d4fc258e37cd5858cad283865fc" gracePeriod=2 Nov 28 06:59:18 crc kubenswrapper[4955]: I1128 06:59:18.096575 4955 generic.go:334] "Generic (PLEG): container finished" podID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerID="8a588daa6fb2aec840fdc404fd0ee7b7e2058d4fc258e37cd5858cad283865fc" exitCode=0 Nov 28 06:59:18 crc kubenswrapper[4955]: I1128 06:59:18.096616 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbjwb" event={"ID":"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95","Type":"ContainerDied","Data":"8a588daa6fb2aec840fdc404fd0ee7b7e2058d4fc258e37cd5858cad283865fc"} Nov 28 06:59:18 crc kubenswrapper[4955]: I1128 06:59:18.953126 4955 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.089063 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-utilities\") pod \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.089244 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-catalog-content\") pod \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.089276 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lccv9\" (UniqueName: \"kubernetes.io/projected/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-kube-api-access-lccv9\") pod \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\" (UID: \"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95\") " Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.089656 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-utilities" (OuterVolumeSpecName: "utilities") pod "2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" (UID: "2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.089813 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.094811 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-kube-api-access-lccv9" (OuterVolumeSpecName: "kube-api-access-lccv9") pod "2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" (UID: "2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95"). InnerVolumeSpecName "kube-api-access-lccv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.105333 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" (UID: "2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.108167 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbjwb" event={"ID":"2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95","Type":"ContainerDied","Data":"fec9ca396a7b90d85539800731ae28a40ce1b62bf22f09ed23caa1566ceeff2d"} Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.108210 4955 scope.go:117] "RemoveContainer" containerID="8a588daa6fb2aec840fdc404fd0ee7b7e2058d4fc258e37cd5858cad283865fc" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.108329 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbjwb" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.156494 4955 scope.go:117] "RemoveContainer" containerID="532711fe15c4265bf78f1d7b5ea5fa2eff9ef76aa2e009826cdee9890e61010d" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.164520 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbjwb"] Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.174828 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbjwb"] Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.193985 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lccv9\" (UniqueName: \"kubernetes.io/projected/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-kube-api-access-lccv9\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.194027 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.198397 4955 scope.go:117] "RemoveContainer" containerID="aa1a56754d23dc0389691b0125402cc5af62d8630a9a6934d6d5bd1cf08694f3" Nov 28 06:59:19 crc kubenswrapper[4955]: I1128 06:59:19.713685 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" path="/var/lib/kubelet/pods/2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95/volumes" Nov 28 06:59:24 crc kubenswrapper[4955]: I1128 06:59:24.704873 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:59:24 crc kubenswrapper[4955]: E1128 06:59:24.706869 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:59:36 crc kubenswrapper[4955]: I1128 06:59:36.303771 4955 generic.go:334] "Generic (PLEG): container finished" podID="845c1878-1788-4409-bbd8-a76a2f3eed71" containerID="7815f0b008a4252cb4e6c672eb4bb88e43cbf74ec153d32f4cd407f58088cbad" exitCode=0 Nov 28 06:59:36 crc kubenswrapper[4955]: I1128 06:59:36.303859 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" event={"ID":"845c1878-1788-4409-bbd8-a76a2f3eed71","Type":"ContainerDied","Data":"7815f0b008a4252cb4e6c672eb4bb88e43cbf74ec153d32f4cd407f58088cbad"} Nov 28 06:59:36 crc kubenswrapper[4955]: I1128 06:59:36.705439 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:59:36 crc kubenswrapper[4955]: E1128 06:59:36.706319 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.832330 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.890628 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-combined-ca-bundle\") pod \"845c1878-1788-4409-bbd8-a76a2f3eed71\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.890690 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-ssh-key\") pod \"845c1878-1788-4409-bbd8-a76a2f3eed71\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.890731 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-inventory\") pod \"845c1878-1788-4409-bbd8-a76a2f3eed71\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.890944 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-246jm\" (UniqueName: \"kubernetes.io/projected/845c1878-1788-4409-bbd8-a76a2f3eed71-kube-api-access-246jm\") pod \"845c1878-1788-4409-bbd8-a76a2f3eed71\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.890979 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-secret-0\") pod \"845c1878-1788-4409-bbd8-a76a2f3eed71\" (UID: \"845c1878-1788-4409-bbd8-a76a2f3eed71\") " Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.899684 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/845c1878-1788-4409-bbd8-a76a2f3eed71-kube-api-access-246jm" (OuterVolumeSpecName: "kube-api-access-246jm") pod "845c1878-1788-4409-bbd8-a76a2f3eed71" (UID: "845c1878-1788-4409-bbd8-a76a2f3eed71"). InnerVolumeSpecName "kube-api-access-246jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.899837 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "845c1878-1788-4409-bbd8-a76a2f3eed71" (UID: "845c1878-1788-4409-bbd8-a76a2f3eed71"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.926078 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "845c1878-1788-4409-bbd8-a76a2f3eed71" (UID: "845c1878-1788-4409-bbd8-a76a2f3eed71"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.936982 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "845c1878-1788-4409-bbd8-a76a2f3eed71" (UID: "845c1878-1788-4409-bbd8-a76a2f3eed71"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.945452 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-inventory" (OuterVolumeSpecName: "inventory") pod "845c1878-1788-4409-bbd8-a76a2f3eed71" (UID: "845c1878-1788-4409-bbd8-a76a2f3eed71"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.992134 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-246jm\" (UniqueName: \"kubernetes.io/projected/845c1878-1788-4409-bbd8-a76a2f3eed71-kube-api-access-246jm\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.992173 4955 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.992185 4955 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.992196 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:37 crc kubenswrapper[4955]: I1128 06:59:37.992205 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/845c1878-1788-4409-bbd8-a76a2f3eed71-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.326483 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" event={"ID":"845c1878-1788-4409-bbd8-a76a2f3eed71","Type":"ContainerDied","Data":"fb27768bbed29b5848e617644bad3a7de743106755724c1ea892b4f764aa2a3d"} Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.326597 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb27768bbed29b5848e617644bad3a7de743106755724c1ea892b4f764aa2a3d" Nov 28 06:59:38 
crc kubenswrapper[4955]: I1128 06:59:38.326624 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.496188 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m"] Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.496890 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="extract-utilities" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.496910 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="extract-utilities" Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.496933 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.496942 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.496959 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerName="extract-content" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.496967 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerName="extract-content" Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.496983 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="extract-content" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.496991 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="extract-content" Nov 28 06:59:38 crc 
kubenswrapper[4955]: E1128 06:59:38.497002 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497010 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.497038 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerName="extract-content" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497045 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerName="extract-content" Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.497059 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerName="extract-utilities" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497067 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerName="extract-utilities" Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.497082 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497090 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.497107 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c1878-1788-4409-bbd8-a76a2f3eed71" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497116 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c1878-1788-4409-bbd8-a76a2f3eed71" 
containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 06:59:38 crc kubenswrapper[4955]: E1128 06:59:38.497132 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerName="extract-utilities" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497140 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerName="extract-utilities" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497366 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d81329a8-aa04-4ada-8f54-44db4feba2d9" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497381 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ee1e24f-6b2f-4b13-b8a4-7f9a24de7e95" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497404 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ed68ac-28d1-4de9-a31e-4f2cac2a67d0" containerName="registry-server" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.497417 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="845c1878-1788-4409-bbd8-a76a2f3eed71" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.498223 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.502048 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.502551 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.502605 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.502848 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.502916 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.503720 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.503846 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.503998 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cszb\" (UniqueName: \"kubernetes.io/projected/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-kube-api-access-8cszb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.504071 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.504354 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.504454 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.504592 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.504688 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.504776 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.509820 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.510085 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.543834 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m"] Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606308 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606365 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606438 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606460 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cszb\" (UniqueName: \"kubernetes.io/projected/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-kube-api-access-8cszb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606493 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606547 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606588 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606613 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.606636 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.607921 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: 
\"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.617405 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.618077 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.618221 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.625137 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.626446 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.626595 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.626665 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.638170 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cszb\" (UniqueName: \"kubernetes.io/projected/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-kube-api-access-8cszb\") pod \"nova-edpm-deployment-openstack-edpm-ipam-db78m\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:38 crc kubenswrapper[4955]: I1128 06:59:38.823796 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 06:59:39 crc kubenswrapper[4955]: I1128 06:59:39.406473 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m"] Nov 28 06:59:39 crc kubenswrapper[4955]: I1128 06:59:39.408371 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 06:59:40 crc kubenswrapper[4955]: I1128 06:59:40.351154 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" event={"ID":"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39","Type":"ContainerStarted","Data":"34dab8562bf3b1f9a86c4c4fbbd7b70316b1705ccbc82093590b6e9902d688e8"} Nov 28 06:59:41 crc kubenswrapper[4955]: I1128 06:59:41.361404 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" event={"ID":"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39","Type":"ContainerStarted","Data":"1d0266ab5a3991b465a2af43271c3886b044ffc1a0507955a22d3ffbabdbfa6e"} Nov 28 06:59:41 crc kubenswrapper[4955]: I1128 06:59:41.392963 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" podStartSLOduration=2.637305558 podStartE2EDuration="3.392942592s" podCreationTimestamp="2025-11-28 06:59:38 +0000 UTC" firstStartedPulling="2025-11-28 06:59:39.408123136 +0000 UTC m=+2301.997378706" lastFinishedPulling="2025-11-28 06:59:40.16376014 +0000 UTC m=+2302.753015740" observedRunningTime="2025-11-28 06:59:41.390110611 +0000 UTC m=+2303.979366211" watchObservedRunningTime="2025-11-28 06:59:41.392942592 +0000 UTC m=+2303.982198182" Nov 28 06:59:49 crc kubenswrapper[4955]: I1128 06:59:49.705256 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 06:59:49 crc kubenswrapper[4955]: E1128 06:59:49.706537 4955 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.168249 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q"] Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.171562 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.173855 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.177796 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.180581 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q"] Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.349419 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d59599a-7a7b-41bb-9c65-ec7903ce914a-config-volume\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.349527 4955 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d59599a-7a7b-41bb-9c65-ec7903ce914a-secret-volume\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.349621 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tf59\" (UniqueName: \"kubernetes.io/projected/8d59599a-7a7b-41bb-9c65-ec7903ce914a-kube-api-access-9tf59\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.451653 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d59599a-7a7b-41bb-9c65-ec7903ce914a-secret-volume\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.451742 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tf59\" (UniqueName: \"kubernetes.io/projected/8d59599a-7a7b-41bb-9c65-ec7903ce914a-kube-api-access-9tf59\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.451877 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d59599a-7a7b-41bb-9c65-ec7903ce914a-config-volume\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.452942 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d59599a-7a7b-41bb-9c65-ec7903ce914a-config-volume\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.472941 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d59599a-7a7b-41bb-9c65-ec7903ce914a-secret-volume\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.483020 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tf59\" (UniqueName: \"kubernetes.io/projected/8d59599a-7a7b-41bb-9c65-ec7903ce914a-kube-api-access-9tf59\") pod \"collect-profiles-29405220-9qp5q\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.504759 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:00 crc kubenswrapper[4955]: I1128 07:00:00.956956 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q"] Nov 28 07:00:01 crc kubenswrapper[4955]: I1128 07:00:01.588731 4955 generic.go:334] "Generic (PLEG): container finished" podID="8d59599a-7a7b-41bb-9c65-ec7903ce914a" containerID="be523a6e24f333511e9b89e61cb0520396af6a78db134477276dec7fdefc4c2f" exitCode=0 Nov 28 07:00:01 crc kubenswrapper[4955]: I1128 07:00:01.588795 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" event={"ID":"8d59599a-7a7b-41bb-9c65-ec7903ce914a","Type":"ContainerDied","Data":"be523a6e24f333511e9b89e61cb0520396af6a78db134477276dec7fdefc4c2f"} Nov 28 07:00:01 crc kubenswrapper[4955]: I1128 07:00:01.589250 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" event={"ID":"8d59599a-7a7b-41bb-9c65-ec7903ce914a","Type":"ContainerStarted","Data":"c397f27cac7cceb995e3dfaf470bee14c820c55821d3d0374623c39be3294ae7"} Nov 28 07:00:02 crc kubenswrapper[4955]: I1128 07:00:02.962343 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.106475 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d59599a-7a7b-41bb-9c65-ec7903ce914a-config-volume\") pod \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.106641 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d59599a-7a7b-41bb-9c65-ec7903ce914a-secret-volume\") pod \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.106720 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tf59\" (UniqueName: \"kubernetes.io/projected/8d59599a-7a7b-41bb-9c65-ec7903ce914a-kube-api-access-9tf59\") pod \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\" (UID: \"8d59599a-7a7b-41bb-9c65-ec7903ce914a\") " Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.107266 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d59599a-7a7b-41bb-9c65-ec7903ce914a-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d59599a-7a7b-41bb-9c65-ec7903ce914a" (UID: "8d59599a-7a7b-41bb-9c65-ec7903ce914a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.113989 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d59599a-7a7b-41bb-9c65-ec7903ce914a-kube-api-access-9tf59" (OuterVolumeSpecName: "kube-api-access-9tf59") pod "8d59599a-7a7b-41bb-9c65-ec7903ce914a" (UID: "8d59599a-7a7b-41bb-9c65-ec7903ce914a"). 
InnerVolumeSpecName "kube-api-access-9tf59". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.114073 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d59599a-7a7b-41bb-9c65-ec7903ce914a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d59599a-7a7b-41bb-9c65-ec7903ce914a" (UID: "8d59599a-7a7b-41bb-9c65-ec7903ce914a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.209692 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tf59\" (UniqueName: \"kubernetes.io/projected/8d59599a-7a7b-41bb-9c65-ec7903ce914a-kube-api-access-9tf59\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.209758 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d59599a-7a7b-41bb-9c65-ec7903ce914a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.209787 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d59599a-7a7b-41bb-9c65-ec7903ce914a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.609209 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" event={"ID":"8d59599a-7a7b-41bb-9c65-ec7903ce914a","Type":"ContainerDied","Data":"c397f27cac7cceb995e3dfaf470bee14c820c55821d3d0374623c39be3294ae7"} Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.609826 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c397f27cac7cceb995e3dfaf470bee14c820c55821d3d0374623c39be3294ae7" Nov 28 07:00:03 crc kubenswrapper[4955]: I1128 07:00:03.609934 4955 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405220-9qp5q" Nov 28 07:00:04 crc kubenswrapper[4955]: I1128 07:00:04.063552 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm"] Nov 28 07:00:04 crc kubenswrapper[4955]: I1128 07:00:04.074233 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405175-fppsm"] Nov 28 07:00:04 crc kubenswrapper[4955]: I1128 07:00:04.704627 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:00:04 crc kubenswrapper[4955]: E1128 07:00:04.705002 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:00:05 crc kubenswrapper[4955]: I1128 07:00:05.722086 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae90aa07-e0e4-47ea-8297-449220260a93" path="/var/lib/kubelet/pods/ae90aa07-e0e4-47ea-8297-449220260a93/volumes" Nov 28 07:00:17 crc kubenswrapper[4955]: I1128 07:00:17.719016 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:00:17 crc kubenswrapper[4955]: E1128 07:00:17.720682 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:00:27 crc kubenswrapper[4955]: I1128 07:00:27.703304 4955 scope.go:117] "RemoveContainer" containerID="d58e07e7aa5880fe29bcdd12e01a062acf7c7b39a2505f151cf4358d495541a5" Nov 28 07:00:30 crc kubenswrapper[4955]: I1128 07:00:30.705118 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:00:30 crc kubenswrapper[4955]: E1128 07:00:30.706369 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:00:45 crc kubenswrapper[4955]: I1128 07:00:45.704342 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:00:45 crc kubenswrapper[4955]: E1128 07:00:45.707037 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.160203 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29405221-4mmm4"] Nov 28 07:01:00 crc kubenswrapper[4955]: E1128 07:01:00.173169 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d59599a-7a7b-41bb-9c65-ec7903ce914a" 
containerName="collect-profiles" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.173188 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d59599a-7a7b-41bb-9c65-ec7903ce914a" containerName="collect-profiles" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.173411 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d59599a-7a7b-41bb-9c65-ec7903ce914a" containerName="collect-profiles" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.174148 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.174437 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405221-4mmm4"] Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.320491 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-config-data\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.320582 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggl4\" (UniqueName: \"kubernetes.io/projected/0b8ffa87-03b1-4df9-a491-15db50f8a75e-kube-api-access-zggl4\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.320607 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-fernet-keys\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" 
Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.321006 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-combined-ca-bundle\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.423209 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-config-data\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.424728 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zggl4\" (UniqueName: \"kubernetes.io/projected/0b8ffa87-03b1-4df9-a491-15db50f8a75e-kube-api-access-zggl4\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.424768 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-fernet-keys\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.424975 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-combined-ca-bundle\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc 
kubenswrapper[4955]: I1128 07:01:00.432813 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-fernet-keys\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.437252 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-config-data\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.442418 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-combined-ca-bundle\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.447931 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zggl4\" (UniqueName: \"kubernetes.io/projected/0b8ffa87-03b1-4df9-a491-15db50f8a75e-kube-api-access-zggl4\") pod \"keystone-cron-29405221-4mmm4\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.499921 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.704283 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:01:00 crc kubenswrapper[4955]: E1128 07:01:00.704876 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:01:00 crc kubenswrapper[4955]: I1128 07:01:00.940688 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405221-4mmm4"] Nov 28 07:01:01 crc kubenswrapper[4955]: I1128 07:01:01.319271 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405221-4mmm4" event={"ID":"0b8ffa87-03b1-4df9-a491-15db50f8a75e","Type":"ContainerStarted","Data":"56ab8643a25d8ee274b673ff0a48cbadfb3a02125e6a8ebff6c6ab81ba82829a"} Nov 28 07:01:01 crc kubenswrapper[4955]: I1128 07:01:01.319613 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405221-4mmm4" event={"ID":"0b8ffa87-03b1-4df9-a491-15db50f8a75e","Type":"ContainerStarted","Data":"bb123ce902f59103c5ea1c3fd8e633c2bd1c65aea832b0a8e3334fb917b23f82"} Nov 28 07:01:01 crc kubenswrapper[4955]: I1128 07:01:01.339598 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29405221-4mmm4" podStartSLOduration=1.339582723 podStartE2EDuration="1.339582723s" podCreationTimestamp="2025-11-28 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:01:01.332795599 +0000 
UTC m=+2383.922051169" watchObservedRunningTime="2025-11-28 07:01:01.339582723 +0000 UTC m=+2383.928838293" Nov 28 07:01:03 crc kubenswrapper[4955]: I1128 07:01:03.337858 4955 generic.go:334] "Generic (PLEG): container finished" podID="0b8ffa87-03b1-4df9-a491-15db50f8a75e" containerID="56ab8643a25d8ee274b673ff0a48cbadfb3a02125e6a8ebff6c6ab81ba82829a" exitCode=0 Nov 28 07:01:03 crc kubenswrapper[4955]: I1128 07:01:03.337950 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405221-4mmm4" event={"ID":"0b8ffa87-03b1-4df9-a491-15db50f8a75e","Type":"ContainerDied","Data":"56ab8643a25d8ee274b673ff0a48cbadfb3a02125e6a8ebff6c6ab81ba82829a"} Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.761664 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.929227 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-fernet-keys\") pod \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.929427 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-config-data\") pod \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.929580 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zggl4\" (UniqueName: \"kubernetes.io/projected/0b8ffa87-03b1-4df9-a491-15db50f8a75e-kube-api-access-zggl4\") pod \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.929646 4955 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-combined-ca-bundle\") pod \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\" (UID: \"0b8ffa87-03b1-4df9-a491-15db50f8a75e\") " Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.946948 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0b8ffa87-03b1-4df9-a491-15db50f8a75e" (UID: "0b8ffa87-03b1-4df9-a491-15db50f8a75e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.947867 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b8ffa87-03b1-4df9-a491-15db50f8a75e-kube-api-access-zggl4" (OuterVolumeSpecName: "kube-api-access-zggl4") pod "0b8ffa87-03b1-4df9-a491-15db50f8a75e" (UID: "0b8ffa87-03b1-4df9-a491-15db50f8a75e"). InnerVolumeSpecName "kube-api-access-zggl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.967692 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b8ffa87-03b1-4df9-a491-15db50f8a75e" (UID: "0b8ffa87-03b1-4df9-a491-15db50f8a75e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:01:04 crc kubenswrapper[4955]: I1128 07:01:04.997926 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-config-data" (OuterVolumeSpecName: "config-data") pod "0b8ffa87-03b1-4df9-a491-15db50f8a75e" (UID: "0b8ffa87-03b1-4df9-a491-15db50f8a75e"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:01:05 crc kubenswrapper[4955]: I1128 07:01:05.031971 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:05 crc kubenswrapper[4955]: I1128 07:01:05.032016 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zggl4\" (UniqueName: \"kubernetes.io/projected/0b8ffa87-03b1-4df9-a491-15db50f8a75e-kube-api-access-zggl4\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:05 crc kubenswrapper[4955]: I1128 07:01:05.032033 4955 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:05 crc kubenswrapper[4955]: I1128 07:01:05.032045 4955 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b8ffa87-03b1-4df9-a491-15db50f8a75e-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 07:01:05 crc kubenswrapper[4955]: I1128 07:01:05.357128 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405221-4mmm4" event={"ID":"0b8ffa87-03b1-4df9-a491-15db50f8a75e","Type":"ContainerDied","Data":"bb123ce902f59103c5ea1c3fd8e633c2bd1c65aea832b0a8e3334fb917b23f82"} Nov 28 07:01:05 crc kubenswrapper[4955]: I1128 07:01:05.357162 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb123ce902f59103c5ea1c3fd8e633c2bd1c65aea832b0a8e3334fb917b23f82" Nov 28 07:01:05 crc kubenswrapper[4955]: I1128 07:01:05.357233 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405221-4mmm4" Nov 28 07:01:13 crc kubenswrapper[4955]: I1128 07:01:13.709978 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:01:13 crc kubenswrapper[4955]: E1128 07:01:13.714040 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:01:26 crc kubenswrapper[4955]: I1128 07:01:26.704381 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:01:26 crc kubenswrapper[4955]: E1128 07:01:26.705854 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:01:40 crc kubenswrapper[4955]: I1128 07:01:40.704666 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:01:40 crc kubenswrapper[4955]: E1128 07:01:40.705518 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:01:52 crc kubenswrapper[4955]: I1128 07:01:52.704724 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:01:52 crc kubenswrapper[4955]: E1128 07:01:52.705585 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:02:05 crc kubenswrapper[4955]: I1128 07:02:05.705063 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:02:05 crc kubenswrapper[4955]: E1128 07:02:05.706616 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:02:16 crc kubenswrapper[4955]: I1128 07:02:16.705263 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:02:16 crc kubenswrapper[4955]: E1128 07:02:16.706111 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:02:27 crc kubenswrapper[4955]: I1128 07:02:27.716834 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:02:27 crc kubenswrapper[4955]: E1128 07:02:27.717969 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:02:39 crc kubenswrapper[4955]: I1128 07:02:39.397101 4955 generic.go:334] "Generic (PLEG): container finished" podID="0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" containerID="1d0266ab5a3991b465a2af43271c3886b044ffc1a0507955a22d3ffbabdbfa6e" exitCode=0 Nov 28 07:02:39 crc kubenswrapper[4955]: I1128 07:02:39.397165 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" event={"ID":"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39","Type":"ContainerDied","Data":"1d0266ab5a3991b465a2af43271c3886b044ffc1a0507955a22d3ffbabdbfa6e"} Nov 28 07:02:39 crc kubenswrapper[4955]: I1128 07:02:39.704637 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:02:39 crc kubenswrapper[4955]: E1128 07:02:39.704908 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.857288 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950188 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-1\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950274 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-ssh-key\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950388 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-0\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950441 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cszb\" (UniqueName: \"kubernetes.io/projected/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-kube-api-access-8cszb\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950468 4955 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-0\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950665 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-combined-ca-bundle\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950751 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-inventory\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950778 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-extra-config-0\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.950813 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-1\") pod \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\" (UID: \"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39\") " Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.957453 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-kube-api-access-8cszb" (OuterVolumeSpecName: "kube-api-access-8cszb") 
pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "kube-api-access-8cszb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:02:40 crc kubenswrapper[4955]: I1128 07:02:40.987093 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.004641 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.012868 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.014064 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.016786 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.020710 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-inventory" (OuterVolumeSpecName: "inventory") pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.026859 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.030832 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" (UID: "0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052519 4955 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052558 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cszb\" (UniqueName: \"kubernetes.io/projected/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-kube-api-access-8cszb\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052574 4955 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052584 4955 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052595 4955 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052604 4955 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052611 4955 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-cell1-compute-config-1\") on node 
\"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052619 4955 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.052626 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.426933 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" event={"ID":"0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39","Type":"ContainerDied","Data":"34dab8562bf3b1f9a86c4c4fbbd7b70316b1705ccbc82093590b6e9902d688e8"} Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.427332 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34dab8562bf3b1f9a86c4c4fbbd7b70316b1705ccbc82093590b6e9902d688e8" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.427044 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-db78m" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.548994 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb"] Nov 28 07:02:41 crc kubenswrapper[4955]: E1128 07:02:41.549335 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b8ffa87-03b1-4df9-a491-15db50f8a75e" containerName="keystone-cron" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.549347 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b8ffa87-03b1-4df9-a491-15db50f8a75e" containerName="keystone-cron" Nov 28 07:02:41 crc kubenswrapper[4955]: E1128 07:02:41.549360 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.549367 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.549650 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.549677 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b8ffa87-03b1-4df9-a491-15db50f8a75e" containerName="keystone-cron" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.550237 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.553142 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.553159 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2ph7b" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.553223 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.553299 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.553143 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.577624 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb"] Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.663356 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.663445 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.663480 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.663518 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.663591 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.663622 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.663654 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwqbt\" (UniqueName: \"kubernetes.io/projected/d7286bef-2382-464e-95fa-61654cead41d-kube-api-access-pwqbt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.764675 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.764773 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.764800 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.764852 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.764886 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.764908 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwqbt\" (UniqueName: \"kubernetes.io/projected/d7286bef-2382-464e-95fa-61654cead41d-kube-api-access-pwqbt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.764982 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.770313 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.770392 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.771827 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.773644 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.780876 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.781722 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.785948 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwqbt\" (UniqueName: \"kubernetes.io/projected/d7286bef-2382-464e-95fa-61654cead41d-kube-api-access-pwqbt\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:41 crc kubenswrapper[4955]: I1128 07:02:41.910176 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:02:42 crc kubenswrapper[4955]: I1128 07:02:42.478650 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb"] Nov 28 07:02:43 crc kubenswrapper[4955]: I1128 07:02:43.448374 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" event={"ID":"d7286bef-2382-464e-95fa-61654cead41d","Type":"ContainerStarted","Data":"609c4c31efe94ef0d7afeff425b2dda157242e468e1f6de283b4b7e4a8428e0d"} Nov 28 07:02:43 crc kubenswrapper[4955]: I1128 07:02:43.449035 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" event={"ID":"d7286bef-2382-464e-95fa-61654cead41d","Type":"ContainerStarted","Data":"93bff2d65282c7422453da1917cd10282cb2aea0262377ddc6368c76791a89a5"} Nov 28 07:02:43 crc kubenswrapper[4955]: I1128 07:02:43.464956 4955 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" podStartSLOduration=1.954456811 podStartE2EDuration="2.464929573s" podCreationTimestamp="2025-11-28 07:02:41 +0000 UTC" firstStartedPulling="2025-11-28 07:02:42.504771496 +0000 UTC m=+2485.094027086" lastFinishedPulling="2025-11-28 07:02:43.015244268 +0000 UTC m=+2485.604499848" observedRunningTime="2025-11-28 07:02:43.461724691 +0000 UTC m=+2486.050980261" watchObservedRunningTime="2025-11-28 07:02:43.464929573 +0000 UTC m=+2486.054185173" Nov 28 07:02:50 crc kubenswrapper[4955]: I1128 07:02:50.705191 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:02:50 crc kubenswrapper[4955]: E1128 07:02:50.706431 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:03:02 crc kubenswrapper[4955]: I1128 07:03:02.704205 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:03:03 crc kubenswrapper[4955]: I1128 07:03:03.666829 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"cc0ebb0d086f1a20320ac87420714bd2825de126b0e780570d4a06ff826fbf51"} Nov 28 07:05:18 crc kubenswrapper[4955]: I1128 07:05:18.079826 4955 generic.go:334] "Generic (PLEG): container finished" podID="d7286bef-2382-464e-95fa-61654cead41d" 
containerID="609c4c31efe94ef0d7afeff425b2dda157242e468e1f6de283b4b7e4a8428e0d" exitCode=0 Nov 28 07:05:18 crc kubenswrapper[4955]: I1128 07:05:18.079942 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" event={"ID":"d7286bef-2382-464e-95fa-61654cead41d","Type":"ContainerDied","Data":"609c4c31efe94ef0d7afeff425b2dda157242e468e1f6de283b4b7e4a8428e0d"} Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.505861 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.548016 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-inventory\") pod \"d7286bef-2382-464e-95fa-61654cead41d\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.549722 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ssh-key\") pod \"d7286bef-2382-464e-95fa-61654cead41d\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.549921 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-2\") pod \"d7286bef-2382-464e-95fa-61654cead41d\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.550151 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-1\") pod 
\"d7286bef-2382-464e-95fa-61654cead41d\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.550880 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-0\") pod \"d7286bef-2382-464e-95fa-61654cead41d\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.551144 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-telemetry-combined-ca-bundle\") pod \"d7286bef-2382-464e-95fa-61654cead41d\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.551264 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwqbt\" (UniqueName: \"kubernetes.io/projected/d7286bef-2382-464e-95fa-61654cead41d-kube-api-access-pwqbt\") pod \"d7286bef-2382-464e-95fa-61654cead41d\" (UID: \"d7286bef-2382-464e-95fa-61654cead41d\") " Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.566654 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d7286bef-2382-464e-95fa-61654cead41d" (UID: "d7286bef-2382-464e-95fa-61654cead41d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.567476 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7286bef-2382-464e-95fa-61654cead41d-kube-api-access-pwqbt" (OuterVolumeSpecName: "kube-api-access-pwqbt") pod "d7286bef-2382-464e-95fa-61654cead41d" (UID: "d7286bef-2382-464e-95fa-61654cead41d"). InnerVolumeSpecName "kube-api-access-pwqbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.587755 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-inventory" (OuterVolumeSpecName: "inventory") pod "d7286bef-2382-464e-95fa-61654cead41d" (UID: "d7286bef-2382-464e-95fa-61654cead41d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.597030 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d7286bef-2382-464e-95fa-61654cead41d" (UID: "d7286bef-2382-464e-95fa-61654cead41d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.599711 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "d7286bef-2382-464e-95fa-61654cead41d" (UID: "d7286bef-2382-464e-95fa-61654cead41d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.601312 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "d7286bef-2382-464e-95fa-61654cead41d" (UID: "d7286bef-2382-464e-95fa-61654cead41d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.608347 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "d7286bef-2382-464e-95fa-61654cead41d" (UID: "d7286bef-2382-464e-95fa-61654cead41d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.653756 4955 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.653787 4955 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.653798 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwqbt\" (UniqueName: \"kubernetes.io/projected/d7286bef-2382-464e-95fa-61654cead41d-kube-api-access-pwqbt\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.653807 4955 
reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.653815 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.653824 4955 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:19 crc kubenswrapper[4955]: I1128 07:05:19.653833 4955 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7286bef-2382-464e-95fa-61654cead41d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 28 07:05:20 crc kubenswrapper[4955]: I1128 07:05:20.101930 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" event={"ID":"d7286bef-2382-464e-95fa-61654cead41d","Type":"ContainerDied","Data":"93bff2d65282c7422453da1917cd10282cb2aea0262377ddc6368c76791a89a5"} Nov 28 07:05:20 crc kubenswrapper[4955]: I1128 07:05:20.101989 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb" Nov 28 07:05:20 crc kubenswrapper[4955]: I1128 07:05:20.101992 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93bff2d65282c7422453da1917cd10282cb2aea0262377ddc6368c76791a89a5" Nov 28 07:05:23 crc kubenswrapper[4955]: I1128 07:05:23.394353 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:05:23 crc kubenswrapper[4955]: I1128 07:05:23.394686 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:05:27 crc kubenswrapper[4955]: I1128 07:05:27.879841 4955 scope.go:117] "RemoveContainer" containerID="d435a41a298fca96b8a306017e2557ecc3d72e80717fdc4646ff4879df33ed14" Nov 28 07:05:27 crc kubenswrapper[4955]: I1128 07:05:27.903445 4955 scope.go:117] "RemoveContainer" containerID="de43545f4a82510a6cef74d5a58a1a765b731b99f160ece3bea30a260c5ade43" Nov 28 07:05:27 crc kubenswrapper[4955]: I1128 07:05:27.928542 4955 scope.go:117] "RemoveContainer" containerID="a854c992e65ba62abfe5f9b3ed85b1b3cdc18187cc777ae93958e027aa745d5b" Nov 28 07:05:53 crc kubenswrapper[4955]: I1128 07:05:53.392615 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:05:53 crc kubenswrapper[4955]: I1128 
07:05:53.393600 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.356783 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 07:06:16 crc kubenswrapper[4955]: E1128 07:06:16.357865 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7286bef-2382-464e-95fa-61654cead41d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.357887 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7286bef-2382-464e-95fa-61654cead41d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.358131 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7286bef-2382-464e-95fa-61654cead41d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.358958 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.362366 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.362368 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-47mgs" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.362493 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.362922 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.381053 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.470368 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.470443 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.470522 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-config-data\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.470574 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.470640 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vjp\" (UniqueName: \"kubernetes.io/projected/81ccd45f-3f32-4e86-8874-0468a6fc2471-kube-api-access-52vjp\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.470689 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.470723 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.470813 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.471075 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.572994 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52vjp\" (UniqueName: \"kubernetes.io/projected/81ccd45f-3f32-4e86-8874-0468a6fc2471-kube-api-access-52vjp\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.573115 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.573169 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.573211 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.573291 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.573484 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.573567 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.573613 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-config-data\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.573664 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: 
\"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.574271 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.574287 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.575038 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.575230 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-config-data\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.575251 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " 
pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.580944 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.587414 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.588264 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.595627 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52vjp\" (UniqueName: \"kubernetes.io/projected/81ccd45f-3f32-4e86-8874-0468a6fc2471-kube-api-access-52vjp\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.620544 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") " pod="openstack/tempest-tests-tempest" Nov 28 07:06:16 crc kubenswrapper[4955]: I1128 07:06:16.692130 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 07:06:17 crc kubenswrapper[4955]: I1128 07:06:17.220398 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 07:06:17 crc kubenswrapper[4955]: I1128 07:06:17.231855 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:06:17 crc kubenswrapper[4955]: I1128 07:06:17.769083 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"81ccd45f-3f32-4e86-8874-0468a6fc2471","Type":"ContainerStarted","Data":"4fc74c2740e1d7d7eaa03a9b751cec5a50c86036f9193da8c33657623c49ce01"} Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.393415 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.394195 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.394255 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.395130 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc0ebb0d086f1a20320ac87420714bd2825de126b0e780570d4a06ff826fbf51"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.395201 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://cc0ebb0d086f1a20320ac87420714bd2825de126b0e780570d4a06ff826fbf51" gracePeriod=600 Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.851980 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="cc0ebb0d086f1a20320ac87420714bd2825de126b0e780570d4a06ff826fbf51" exitCode=0 Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.852090 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"cc0ebb0d086f1a20320ac87420714bd2825de126b0e780570d4a06ff826fbf51"} Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.852414 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f"} Nov 28 07:06:23 crc kubenswrapper[4955]: I1128 07:06:23.852439 4955 scope.go:117] "RemoveContainer" containerID="8fe2cd86a7c797d0af182538abde38d49fe31c9c4b4aa7d2d2f51630fc112e38" Nov 28 07:06:49 crc kubenswrapper[4955]: E1128 07:06:49.802162 4955 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 28 07:06:49 crc kubenswrapper[4955]: E1128 07:06:49.802839 4955 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52vjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(81ccd45f-3f32-4e86-8874-0468a6fc2471): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 07:06:49 crc kubenswrapper[4955]: E1128 07:06:49.804102 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="81ccd45f-3f32-4e86-8874-0468a6fc2471" Nov 28 07:06:50 crc kubenswrapper[4955]: E1128 07:06:50.139985 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="81ccd45f-3f32-4e86-8874-0468a6fc2471" Nov 28 07:07:01 crc 
kubenswrapper[4955]: I1128 07:07:01.486959 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 07:07:03 crc kubenswrapper[4955]: I1128 07:07:03.286760 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"81ccd45f-3f32-4e86-8874-0468a6fc2471","Type":"ContainerStarted","Data":"62d2b76a5bd85fe105df65b2b77cf9506efe62ea05d86252c3dca1bcfd73aff9"} Nov 28 07:07:03 crc kubenswrapper[4955]: I1128 07:07:03.311418 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.059092754 podStartE2EDuration="48.311394631s" podCreationTimestamp="2025-11-28 07:06:15 +0000 UTC" firstStartedPulling="2025-11-28 07:06:17.231562693 +0000 UTC m=+2699.820818273" lastFinishedPulling="2025-11-28 07:07:01.48386457 +0000 UTC m=+2744.073120150" observedRunningTime="2025-11-28 07:07:03.309800525 +0000 UTC m=+2745.899056085" watchObservedRunningTime="2025-11-28 07:07:03.311394631 +0000 UTC m=+2745.900650211" Nov 28 07:08:23 crc kubenswrapper[4955]: I1128 07:08:23.392969 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:08:23 crc kubenswrapper[4955]: I1128 07:08:23.393617 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:08:53 crc kubenswrapper[4955]: I1128 07:08:53.393851 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:08:53 crc kubenswrapper[4955]: I1128 07:08:53.395409 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:09:23 crc kubenswrapper[4955]: I1128 07:09:23.392901 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:09:23 crc kubenswrapper[4955]: I1128 07:09:23.393690 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:09:23 crc kubenswrapper[4955]: I1128 07:09:23.393795 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 07:09:23 crc kubenswrapper[4955]: I1128 07:09:23.394935 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:09:23 crc 
kubenswrapper[4955]: I1128 07:09:23.395038 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" gracePeriod=600 Nov 28 07:09:23 crc kubenswrapper[4955]: E1128 07:09:23.515923 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:09:23 crc kubenswrapper[4955]: I1128 07:09:23.718548 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" exitCode=0 Nov 28 07:09:23 crc kubenswrapper[4955]: I1128 07:09:23.726815 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f"} Nov 28 07:09:23 crc kubenswrapper[4955]: I1128 07:09:23.726883 4955 scope.go:117] "RemoveContainer" containerID="cc0ebb0d086f1a20320ac87420714bd2825de126b0e780570d4a06ff826fbf51" Nov 28 07:09:23 crc kubenswrapper[4955]: I1128 07:09:23.727213 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:09:23 crc kubenswrapper[4955]: E1128 07:09:23.727491 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.367486 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tw8kc"] Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.420264 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.423213 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tw8kc"] Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.539664 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-utilities\") pod \"certified-operators-tw8kc\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.539723 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-catalog-content\") pod \"certified-operators-tw8kc\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.539789 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz42x\" (UniqueName: \"kubernetes.io/projected/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-kube-api-access-wz42x\") pod \"certified-operators-tw8kc\" (UID: 
\"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.641522 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-utilities\") pod \"certified-operators-tw8kc\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.641580 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-catalog-content\") pod \"certified-operators-tw8kc\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.641655 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz42x\" (UniqueName: \"kubernetes.io/projected/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-kube-api-access-wz42x\") pod \"certified-operators-tw8kc\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.641999 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-utilities\") pod \"certified-operators-tw8kc\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.642121 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-catalog-content\") pod \"certified-operators-tw8kc\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") 
" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.662725 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz42x\" (UniqueName: \"kubernetes.io/projected/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-kube-api-access-wz42x\") pod \"certified-operators-tw8kc\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:37 crc kubenswrapper[4955]: I1128 07:09:37.749670 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:38 crc kubenswrapper[4955]: I1128 07:09:38.253431 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tw8kc"] Nov 28 07:09:38 crc kubenswrapper[4955]: I1128 07:09:38.704636 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:09:38 crc kubenswrapper[4955]: E1128 07:09:38.704973 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:09:38 crc kubenswrapper[4955]: I1128 07:09:38.876040 4955 generic.go:334] "Generic (PLEG): container finished" podID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerID="d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601" exitCode=0 Nov 28 07:09:38 crc kubenswrapper[4955]: I1128 07:09:38.876160 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw8kc" 
event={"ID":"c40e9dfe-e6bd-4984-9421-ad20b3088f1b","Type":"ContainerDied","Data":"d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601"} Nov 28 07:09:38 crc kubenswrapper[4955]: I1128 07:09:38.876614 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw8kc" event={"ID":"c40e9dfe-e6bd-4984-9421-ad20b3088f1b","Type":"ContainerStarted","Data":"031c4d83bab8c098386856a8d5404005e77393c70309cf9623c8217e601bd2e2"} Nov 28 07:09:40 crc kubenswrapper[4955]: I1128 07:09:40.898488 4955 generic.go:334] "Generic (PLEG): container finished" podID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerID="51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08" exitCode=0 Nov 28 07:09:40 crc kubenswrapper[4955]: I1128 07:09:40.898608 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw8kc" event={"ID":"c40e9dfe-e6bd-4984-9421-ad20b3088f1b","Type":"ContainerDied","Data":"51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08"} Nov 28 07:09:41 crc kubenswrapper[4955]: I1128 07:09:41.909207 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw8kc" event={"ID":"c40e9dfe-e6bd-4984-9421-ad20b3088f1b","Type":"ContainerStarted","Data":"f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77"} Nov 28 07:09:41 crc kubenswrapper[4955]: I1128 07:09:41.930616 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tw8kc" podStartSLOduration=2.171932751 podStartE2EDuration="4.930599378s" podCreationTimestamp="2025-11-28 07:09:37 +0000 UTC" firstStartedPulling="2025-11-28 07:09:38.88003919 +0000 UTC m=+2901.469294800" lastFinishedPulling="2025-11-28 07:09:41.638705827 +0000 UTC m=+2904.227961427" observedRunningTime="2025-11-28 07:09:41.926646795 +0000 UTC m=+2904.515902375" watchObservedRunningTime="2025-11-28 07:09:41.930599378 +0000 UTC 
m=+2904.519854948" Nov 28 07:09:47 crc kubenswrapper[4955]: I1128 07:09:47.749852 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:47 crc kubenswrapper[4955]: I1128 07:09:47.750766 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:47 crc kubenswrapper[4955]: I1128 07:09:47.806051 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:48 crc kubenswrapper[4955]: I1128 07:09:48.027887 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:48 crc kubenswrapper[4955]: I1128 07:09:48.081290 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tw8kc"] Nov 28 07:09:49 crc kubenswrapper[4955]: I1128 07:09:49.995812 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tw8kc" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerName="registry-server" containerID="cri-o://f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77" gracePeriod=2 Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.449085 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qs4dm"] Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.452425 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.467869 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qs4dm"] Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.542686 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-catalog-content\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.542744 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-utilities\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.542845 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hrkk\" (UniqueName: \"kubernetes.io/projected/27b8d28e-3fdf-4a23-b53b-0be6029f3358-kube-api-access-2hrkk\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.544006 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.644179 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-catalog-content\") pod \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.644602 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz42x\" (UniqueName: \"kubernetes.io/projected/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-kube-api-access-wz42x\") pod \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.644794 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-utilities\") pod \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\" (UID: \"c40e9dfe-e6bd-4984-9421-ad20b3088f1b\") " Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.645116 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-catalog-content\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.645202 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-utilities\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.645295 4955 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hrkk\" (UniqueName: \"kubernetes.io/projected/27b8d28e-3fdf-4a23-b53b-0be6029f3358-kube-api-access-2hrkk\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.646074 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-catalog-content\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.646136 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-utilities\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.646561 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-utilities" (OuterVolumeSpecName: "utilities") pod "c40e9dfe-e6bd-4984-9421-ad20b3088f1b" (UID: "c40e9dfe-e6bd-4984-9421-ad20b3088f1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.652666 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-kube-api-access-wz42x" (OuterVolumeSpecName: "kube-api-access-wz42x") pod "c40e9dfe-e6bd-4984-9421-ad20b3088f1b" (UID: "c40e9dfe-e6bd-4984-9421-ad20b3088f1b"). InnerVolumeSpecName "kube-api-access-wz42x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.664371 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hrkk\" (UniqueName: \"kubernetes.io/projected/27b8d28e-3fdf-4a23-b53b-0be6029f3358-kube-api-access-2hrkk\") pod \"redhat-operators-qs4dm\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.694063 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c40e9dfe-e6bd-4984-9421-ad20b3088f1b" (UID: "c40e9dfe-e6bd-4984-9421-ad20b3088f1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.747158 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.747191 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.747202 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz42x\" (UniqueName: \"kubernetes.io/projected/c40e9dfe-e6bd-4984-9421-ad20b3088f1b-kube-api-access-wz42x\") on node \"crc\" DevicePath \"\"" Nov 28 07:09:50 crc kubenswrapper[4955]: I1128 07:09:50.855740 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.008674 4955 generic.go:334] "Generic (PLEG): container finished" podID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerID="f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77" exitCode=0 Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.008891 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw8kc" event={"ID":"c40e9dfe-e6bd-4984-9421-ad20b3088f1b","Type":"ContainerDied","Data":"f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77"} Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.008963 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tw8kc" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.009034 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tw8kc" event={"ID":"c40e9dfe-e6bd-4984-9421-ad20b3088f1b","Type":"ContainerDied","Data":"031c4d83bab8c098386856a8d5404005e77393c70309cf9623c8217e601bd2e2"} Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.009070 4955 scope.go:117] "RemoveContainer" containerID="f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.040049 4955 scope.go:117] "RemoveContainer" containerID="51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.058861 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tw8kc"] Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.069208 4955 scope.go:117] "RemoveContainer" containerID="d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.069880 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-tw8kc"] Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.094832 4955 scope.go:117] "RemoveContainer" containerID="f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77" Nov 28 07:09:51 crc kubenswrapper[4955]: E1128 07:09:51.096076 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77\": container with ID starting with f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77 not found: ID does not exist" containerID="f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.096106 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77"} err="failed to get container status \"f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77\": rpc error: code = NotFound desc = could not find container \"f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77\": container with ID starting with f6825e8d995069f7a350339eae7c83464f4378c698bc98ab17f89ccedf113f77 not found: ID does not exist" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.096128 4955 scope.go:117] "RemoveContainer" containerID="51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08" Nov 28 07:09:51 crc kubenswrapper[4955]: E1128 07:09:51.096648 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08\": container with ID starting with 51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08 not found: ID does not exist" containerID="51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 
07:09:51.096665 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08"} err="failed to get container status \"51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08\": rpc error: code = NotFound desc = could not find container \"51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08\": container with ID starting with 51e45dea247f0ebc2416282bc74a31847fbfbb71600ffbad7adf8769602d9f08 not found: ID does not exist" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.096678 4955 scope.go:117] "RemoveContainer" containerID="d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601" Nov 28 07:09:51 crc kubenswrapper[4955]: E1128 07:09:51.096881 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601\": container with ID starting with d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601 not found: ID does not exist" containerID="d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.096898 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601"} err="failed to get container status \"d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601\": rpc error: code = NotFound desc = could not find container \"d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601\": container with ID starting with d8502163ecdf58d89b3d57e745ee4154081d2e5437e5eec5744a1ee3e59c2601 not found: ID does not exist" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.309362 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qs4dm"] Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 
07:09:51.704376 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:09:51 crc kubenswrapper[4955]: E1128 07:09:51.704950 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:09:51 crc kubenswrapper[4955]: I1128 07:09:51.725919 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" path="/var/lib/kubelet/pods/c40e9dfe-e6bd-4984-9421-ad20b3088f1b/volumes" Nov 28 07:09:52 crc kubenswrapper[4955]: I1128 07:09:52.019799 4955 generic.go:334] "Generic (PLEG): container finished" podID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerID="b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada" exitCode=0 Nov 28 07:09:52 crc kubenswrapper[4955]: I1128 07:09:52.019880 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qs4dm" event={"ID":"27b8d28e-3fdf-4a23-b53b-0be6029f3358","Type":"ContainerDied","Data":"b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada"} Nov 28 07:09:52 crc kubenswrapper[4955]: I1128 07:09:52.019910 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qs4dm" event={"ID":"27b8d28e-3fdf-4a23-b53b-0be6029f3358","Type":"ContainerStarted","Data":"0c1ecaf7838a6742bb20892c7d8dd105af6d71241e0119cf7fa54bb0533281fb"} Nov 28 07:09:53 crc kubenswrapper[4955]: I1128 07:09:53.033483 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qs4dm" 
event={"ID":"27b8d28e-3fdf-4a23-b53b-0be6029f3358","Type":"ContainerStarted","Data":"3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf"} Nov 28 07:09:56 crc kubenswrapper[4955]: I1128 07:09:56.066086 4955 generic.go:334] "Generic (PLEG): container finished" podID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerID="3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf" exitCode=0 Nov 28 07:09:56 crc kubenswrapper[4955]: I1128 07:09:56.066181 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qs4dm" event={"ID":"27b8d28e-3fdf-4a23-b53b-0be6029f3358","Type":"ContainerDied","Data":"3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf"} Nov 28 07:09:57 crc kubenswrapper[4955]: I1128 07:09:57.080061 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qs4dm" event={"ID":"27b8d28e-3fdf-4a23-b53b-0be6029f3358","Type":"ContainerStarted","Data":"e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33"} Nov 28 07:09:57 crc kubenswrapper[4955]: I1128 07:09:57.112629 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qs4dm" podStartSLOduration=2.634687592 podStartE2EDuration="7.112596493s" podCreationTimestamp="2025-11-28 07:09:50 +0000 UTC" firstStartedPulling="2025-11-28 07:09:52.021628514 +0000 UTC m=+2914.610884084" lastFinishedPulling="2025-11-28 07:09:56.499537415 +0000 UTC m=+2919.088792985" observedRunningTime="2025-11-28 07:09:57.096579465 +0000 UTC m=+2919.685835075" watchObservedRunningTime="2025-11-28 07:09:57.112596493 +0000 UTC m=+2919.701852073" Nov 28 07:10:00 crc kubenswrapper[4955]: I1128 07:10:00.856711 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:10:00 crc kubenswrapper[4955]: I1128 07:10:00.857171 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:10:01 crc kubenswrapper[4955]: I1128 07:10:01.914287 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qs4dm" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="registry-server" probeResult="failure" output=< Nov 28 07:10:01 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Nov 28 07:10:01 crc kubenswrapper[4955]: > Nov 28 07:10:03 crc kubenswrapper[4955]: I1128 07:10:03.705035 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:10:03 crc kubenswrapper[4955]: E1128 07:10:03.705721 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:10:09 crc kubenswrapper[4955]: I1128 07:10:09.959793 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nfj8x"] Nov 28 07:10:09 crc kubenswrapper[4955]: E1128 07:10:09.960981 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerName="registry-server" Nov 28 07:10:09 crc kubenswrapper[4955]: I1128 07:10:09.961002 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerName="registry-server" Nov 28 07:10:09 crc kubenswrapper[4955]: E1128 07:10:09.961082 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerName="extract-utilities" Nov 28 07:10:09 crc kubenswrapper[4955]: I1128 07:10:09.961096 4955 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerName="extract-utilities" Nov 28 07:10:09 crc kubenswrapper[4955]: E1128 07:10:09.961120 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerName="extract-content" Nov 28 07:10:09 crc kubenswrapper[4955]: I1128 07:10:09.961132 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerName="extract-content" Nov 28 07:10:09 crc kubenswrapper[4955]: I1128 07:10:09.961437 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40e9dfe-e6bd-4984-9421-ad20b3088f1b" containerName="registry-server" Nov 28 07:10:09 crc kubenswrapper[4955]: I1128 07:10:09.964599 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:09 crc kubenswrapper[4955]: I1128 07:10:09.973865 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfj8x"] Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.039244 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skqww\" (UniqueName: \"kubernetes.io/projected/22637d60-a423-47fe-910a-7b2b98ae6eda-kube-api-access-skqww\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.039322 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-utilities\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.039486 4955 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-catalog-content\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.141386 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-catalog-content\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.141441 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skqww\" (UniqueName: \"kubernetes.io/projected/22637d60-a423-47fe-910a-7b2b98ae6eda-kube-api-access-skqww\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.141486 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-utilities\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.142247 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-catalog-content\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.142444 4955 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-utilities\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.167560 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skqww\" (UniqueName: \"kubernetes.io/projected/22637d60-a423-47fe-910a-7b2b98ae6eda-kube-api-access-skqww\") pod \"redhat-marketplace-nfj8x\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.302513 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.783614 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfj8x"] Nov 28 07:10:10 crc kubenswrapper[4955]: W1128 07:10:10.792305 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22637d60_a423_47fe_910a_7b2b98ae6eda.slice/crio-1ec32d0efd2504336df71452af9b2e282b4f420bdbef72b6441f5d9a3db80f1f WatchSource:0}: Error finding container 1ec32d0efd2504336df71452af9b2e282b4f420bdbef72b6441f5d9a3db80f1f: Status 404 returned error can't find the container with id 1ec32d0efd2504336df71452af9b2e282b4f420bdbef72b6441f5d9a3db80f1f Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.906962 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:10:10 crc kubenswrapper[4955]: I1128 07:10:10.969989 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 
07:10:11 crc kubenswrapper[4955]: I1128 07:10:11.213592 4955 generic.go:334] "Generic (PLEG): container finished" podID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerID="c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9" exitCode=0 Nov 28 07:10:11 crc kubenswrapper[4955]: I1128 07:10:11.213714 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfj8x" event={"ID":"22637d60-a423-47fe-910a-7b2b98ae6eda","Type":"ContainerDied","Data":"c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9"} Nov 28 07:10:11 crc kubenswrapper[4955]: I1128 07:10:11.213754 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfj8x" event={"ID":"22637d60-a423-47fe-910a-7b2b98ae6eda","Type":"ContainerStarted","Data":"1ec32d0efd2504336df71452af9b2e282b4f420bdbef72b6441f5d9a3db80f1f"} Nov 28 07:10:12 crc kubenswrapper[4955]: I1128 07:10:12.226429 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfj8x" event={"ID":"22637d60-a423-47fe-910a-7b2b98ae6eda","Type":"ContainerStarted","Data":"da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15"} Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.238001 4955 generic.go:334] "Generic (PLEG): container finished" podID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerID="da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15" exitCode=0 Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.238103 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfj8x" event={"ID":"22637d60-a423-47fe-910a-7b2b98ae6eda","Type":"ContainerDied","Data":"da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15"} Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.326770 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qs4dm"] Nov 28 07:10:13 crc 
kubenswrapper[4955]: I1128 07:10:13.327037 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qs4dm" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="registry-server" containerID="cri-o://e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33" gracePeriod=2 Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.857288 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.915275 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-catalog-content\") pod \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.915360 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-utilities\") pod \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.915568 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hrkk\" (UniqueName: \"kubernetes.io/projected/27b8d28e-3fdf-4a23-b53b-0be6029f3358-kube-api-access-2hrkk\") pod \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\" (UID: \"27b8d28e-3fdf-4a23-b53b-0be6029f3358\") " Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.916694 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-utilities" (OuterVolumeSpecName: "utilities") pod "27b8d28e-3fdf-4a23-b53b-0be6029f3358" (UID: "27b8d28e-3fdf-4a23-b53b-0be6029f3358"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.918574 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:13 crc kubenswrapper[4955]: I1128 07:10:13.931321 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b8d28e-3fdf-4a23-b53b-0be6029f3358-kube-api-access-2hrkk" (OuterVolumeSpecName: "kube-api-access-2hrkk") pod "27b8d28e-3fdf-4a23-b53b-0be6029f3358" (UID: "27b8d28e-3fdf-4a23-b53b-0be6029f3358"). InnerVolumeSpecName "kube-api-access-2hrkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.009927 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27b8d28e-3fdf-4a23-b53b-0be6029f3358" (UID: "27b8d28e-3fdf-4a23-b53b-0be6029f3358"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.020001 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27b8d28e-3fdf-4a23-b53b-0be6029f3358-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.020048 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hrkk\" (UniqueName: \"kubernetes.io/projected/27b8d28e-3fdf-4a23-b53b-0be6029f3358-kube-api-access-2hrkk\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.250592 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfj8x" event={"ID":"22637d60-a423-47fe-910a-7b2b98ae6eda","Type":"ContainerStarted","Data":"9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3"} Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.252457 4955 generic.go:334] "Generic (PLEG): container finished" podID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerID="e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33" exitCode=0 Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.252581 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qs4dm" event={"ID":"27b8d28e-3fdf-4a23-b53b-0be6029f3358","Type":"ContainerDied","Data":"e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33"} Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.252609 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qs4dm" event={"ID":"27b8d28e-3fdf-4a23-b53b-0be6029f3358","Type":"ContainerDied","Data":"0c1ecaf7838a6742bb20892c7d8dd105af6d71241e0119cf7fa54bb0533281fb"} Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.252624 4955 scope.go:117] "RemoveContainer" 
containerID="e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.252623 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qs4dm" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.274892 4955 scope.go:117] "RemoveContainer" containerID="3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.290200 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nfj8x" podStartSLOduration=2.611167938 podStartE2EDuration="5.290170216s" podCreationTimestamp="2025-11-28 07:10:09 +0000 UTC" firstStartedPulling="2025-11-28 07:10:11.216460026 +0000 UTC m=+2933.805715616" lastFinishedPulling="2025-11-28 07:10:13.895462324 +0000 UTC m=+2936.484717894" observedRunningTime="2025-11-28 07:10:14.277885524 +0000 UTC m=+2936.867141114" watchObservedRunningTime="2025-11-28 07:10:14.290170216 +0000 UTC m=+2936.879425826" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.311578 4955 scope.go:117] "RemoveContainer" containerID="b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.316756 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qs4dm"] Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.327548 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qs4dm"] Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.336483 4955 scope.go:117] "RemoveContainer" containerID="e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33" Nov 28 07:10:14 crc kubenswrapper[4955]: E1128 07:10:14.336916 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33\": container with ID starting with e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33 not found: ID does not exist" containerID="e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.337016 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33"} err="failed to get container status \"e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33\": rpc error: code = NotFound desc = could not find container \"e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33\": container with ID starting with e64c3882ffaee8451247265f5bb2febee1e7395ffc1f1235881972b3ff566f33 not found: ID does not exist" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.337118 4955 scope.go:117] "RemoveContainer" containerID="3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf" Nov 28 07:10:14 crc kubenswrapper[4955]: E1128 07:10:14.338497 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf\": container with ID starting with 3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf not found: ID does not exist" containerID="3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.338608 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf"} err="failed to get container status \"3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf\": rpc error: code = NotFound desc = could not find container \"3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf\": container with ID 
starting with 3bbc4af4139a1fba7d8efc5b2fda39b428b72c698b577de91f2ed5351ef3cacf not found: ID does not exist" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.338675 4955 scope.go:117] "RemoveContainer" containerID="b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada" Nov 28 07:10:14 crc kubenswrapper[4955]: E1128 07:10:14.339359 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada\": container with ID starting with b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada not found: ID does not exist" containerID="b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada" Nov 28 07:10:14 crc kubenswrapper[4955]: I1128 07:10:14.339403 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada"} err="failed to get container status \"b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada\": rpc error: code = NotFound desc = could not find container \"b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada\": container with ID starting with b3f5be1895e53ebc6021a5e2d8335c776f02ff3748ea12efbaa8294e8d44dada not found: ID does not exist" Nov 28 07:10:15 crc kubenswrapper[4955]: I1128 07:10:15.726325 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" path="/var/lib/kubelet/pods/27b8d28e-3fdf-4a23-b53b-0be6029f3358/volumes" Nov 28 07:10:18 crc kubenswrapper[4955]: I1128 07:10:18.704801 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:10:18 crc kubenswrapper[4955]: E1128 07:10:18.705804 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:10:20 crc kubenswrapper[4955]: I1128 07:10:20.302761 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:20 crc kubenswrapper[4955]: I1128 07:10:20.303979 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:20 crc kubenswrapper[4955]: I1128 07:10:20.359445 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:21 crc kubenswrapper[4955]: I1128 07:10:21.372028 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:21 crc kubenswrapper[4955]: I1128 07:10:21.619829 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfj8x"] Nov 28 07:10:23 crc kubenswrapper[4955]: I1128 07:10:23.341962 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nfj8x" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerName="registry-server" containerID="cri-o://9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3" gracePeriod=2 Nov 28 07:10:23 crc kubenswrapper[4955]: I1128 07:10:23.928152 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.048345 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-utilities\") pod \"22637d60-a423-47fe-910a-7b2b98ae6eda\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.048710 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-catalog-content\") pod \"22637d60-a423-47fe-910a-7b2b98ae6eda\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.048781 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skqww\" (UniqueName: \"kubernetes.io/projected/22637d60-a423-47fe-910a-7b2b98ae6eda-kube-api-access-skqww\") pod \"22637d60-a423-47fe-910a-7b2b98ae6eda\" (UID: \"22637d60-a423-47fe-910a-7b2b98ae6eda\") " Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.049761 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-utilities" (OuterVolumeSpecName: "utilities") pod "22637d60-a423-47fe-910a-7b2b98ae6eda" (UID: "22637d60-a423-47fe-910a-7b2b98ae6eda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.057775 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22637d60-a423-47fe-910a-7b2b98ae6eda-kube-api-access-skqww" (OuterVolumeSpecName: "kube-api-access-skqww") pod "22637d60-a423-47fe-910a-7b2b98ae6eda" (UID: "22637d60-a423-47fe-910a-7b2b98ae6eda"). InnerVolumeSpecName "kube-api-access-skqww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.073855 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22637d60-a423-47fe-910a-7b2b98ae6eda" (UID: "22637d60-a423-47fe-910a-7b2b98ae6eda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.150309 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.150345 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skqww\" (UniqueName: \"kubernetes.io/projected/22637d60-a423-47fe-910a-7b2b98ae6eda-kube-api-access-skqww\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.150358 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22637d60-a423-47fe-910a-7b2b98ae6eda-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.365496 4955 generic.go:334] "Generic (PLEG): container finished" podID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerID="9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3" exitCode=0 Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.365559 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfj8x" event={"ID":"22637d60-a423-47fe-910a-7b2b98ae6eda","Type":"ContainerDied","Data":"9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3"} Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.365585 4955 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-nfj8x" event={"ID":"22637d60-a423-47fe-910a-7b2b98ae6eda","Type":"ContainerDied","Data":"1ec32d0efd2504336df71452af9b2e282b4f420bdbef72b6441f5d9a3db80f1f"} Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.365603 4955 scope.go:117] "RemoveContainer" containerID="9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.365636 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfj8x" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.408589 4955 scope.go:117] "RemoveContainer" containerID="da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.409627 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfj8x"] Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.419984 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfj8x"] Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.442674 4955 scope.go:117] "RemoveContainer" containerID="c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.487418 4955 scope.go:117] "RemoveContainer" containerID="9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3" Nov 28 07:10:24 crc kubenswrapper[4955]: E1128 07:10:24.487865 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3\": container with ID starting with 9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3 not found: ID does not exist" containerID="9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.487899 4955 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3"} err="failed to get container status \"9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3\": rpc error: code = NotFound desc = could not find container \"9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3\": container with ID starting with 9cb5f475a28f55a2329700cc5d86dacb1dcf615db50f9730d55bdacbce791ce3 not found: ID does not exist" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.487925 4955 scope.go:117] "RemoveContainer" containerID="da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15" Nov 28 07:10:24 crc kubenswrapper[4955]: E1128 07:10:24.488676 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15\": container with ID starting with da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15 not found: ID does not exist" containerID="da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.488737 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15"} err="failed to get container status \"da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15\": rpc error: code = NotFound desc = could not find container \"da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15\": container with ID starting with da34a503d764c280697a46f83d1370e83310d403281b391a8a629fd4b32c0e15 not found: ID does not exist" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.488782 4955 scope.go:117] "RemoveContainer" containerID="c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9" Nov 28 07:10:24 crc kubenswrapper[4955]: E1128 
07:10:24.489256 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9\": container with ID starting with c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9 not found: ID does not exist" containerID="c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9" Nov 28 07:10:24 crc kubenswrapper[4955]: I1128 07:10:24.489335 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9"} err="failed to get container status \"c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9\": rpc error: code = NotFound desc = could not find container \"c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9\": container with ID starting with c4f0a9214799a42ee8973bb110ec79a643dbf5a81bb533cfcfa7c40d795a6aa9 not found: ID does not exist" Nov 28 07:10:25 crc kubenswrapper[4955]: I1128 07:10:25.713721 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" path="/var/lib/kubelet/pods/22637d60-a423-47fe-910a-7b2b98ae6eda/volumes" Nov 28 07:10:30 crc kubenswrapper[4955]: I1128 07:10:30.704933 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:10:30 crc kubenswrapper[4955]: E1128 07:10:30.705755 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:10:45 crc kubenswrapper[4955]: I1128 07:10:45.704667 
4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:10:45 crc kubenswrapper[4955]: E1128 07:10:45.705870 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:10:58 crc kubenswrapper[4955]: I1128 07:10:58.704414 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:10:58 crc kubenswrapper[4955]: E1128 07:10:58.705633 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:11:09 crc kubenswrapper[4955]: I1128 07:11:09.704280 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:11:09 crc kubenswrapper[4955]: E1128 07:11:09.705410 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:11:22 crc kubenswrapper[4955]: I1128 
07:11:22.704138 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:11:22 crc kubenswrapper[4955]: E1128 07:11:22.706250 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:11:34 crc kubenswrapper[4955]: I1128 07:11:34.704345 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:11:34 crc kubenswrapper[4955]: E1128 07:11:34.705261 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:11:48 crc kubenswrapper[4955]: I1128 07:11:48.705066 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:11:48 crc kubenswrapper[4955]: E1128 07:11:48.707534 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:12:00 crc 
kubenswrapper[4955]: I1128 07:12:00.704855 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:12:00 crc kubenswrapper[4955]: E1128 07:12:00.706131 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:12:12 crc kubenswrapper[4955]: I1128 07:12:12.704035 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:12:12 crc kubenswrapper[4955]: E1128 07:12:12.704786 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:12:25 crc kubenswrapper[4955]: I1128 07:12:25.705169 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:12:25 crc kubenswrapper[4955]: E1128 07:12:25.706224 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 
28 07:12:39 crc kubenswrapper[4955]: I1128 07:12:39.704528 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:12:39 crc kubenswrapper[4955]: E1128 07:12:39.705315 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:12:50 crc kubenswrapper[4955]: I1128 07:12:50.704859 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:12:50 crc kubenswrapper[4955]: E1128 07:12:50.705883 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:13:02 crc kubenswrapper[4955]: I1128 07:13:02.704737 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:13:02 crc kubenswrapper[4955]: E1128 07:13:02.705923 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" 
podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:13:16 crc kubenswrapper[4955]: I1128 07:13:16.704249 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:13:16 crc kubenswrapper[4955]: E1128 07:13:16.704979 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:13:30 crc kubenswrapper[4955]: I1128 07:13:30.705550 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:13:30 crc kubenswrapper[4955]: E1128 07:13:30.706832 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:13:43 crc kubenswrapper[4955]: I1128 07:13:43.705368 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:13:43 crc kubenswrapper[4955]: E1128 07:13:43.706440 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:13:56 crc kubenswrapper[4955]: I1128 07:13:56.704872 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:13:56 crc kubenswrapper[4955]: E1128 07:13:56.705770 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:14:08 crc kubenswrapper[4955]: I1128 07:14:08.704548 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:14:08 crc kubenswrapper[4955]: E1128 07:14:08.705386 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:14:19 crc kubenswrapper[4955]: I1128 07:14:19.705558 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:14:19 crc kubenswrapper[4955]: E1128 07:14:19.706484 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:14:30 crc kubenswrapper[4955]: I1128 07:14:30.704777 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f" Nov 28 07:14:31 crc kubenswrapper[4955]: I1128 07:14:31.115582 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"13abcec15b6c79655d494ebee90b7d6c3e3f67982eede961271cbde86fce42b5"} Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.142583 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"] Nov 28 07:15:00 crc kubenswrapper[4955]: E1128 07:15:00.143670 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerName="extract-content" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.143683 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerName="extract-content" Nov 28 07:15:00 crc kubenswrapper[4955]: E1128 07:15:00.143694 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="extract-content" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.143702 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="extract-content" Nov 28 07:15:00 crc kubenswrapper[4955]: E1128 07:15:00.143724 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerName="registry-server" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.143731 4955 
state_mem.go:107] "Deleted CPUSet assignment" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerName="registry-server" Nov 28 07:15:00 crc kubenswrapper[4955]: E1128 07:15:00.143755 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="registry-server" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.143761 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="registry-server" Nov 28 07:15:00 crc kubenswrapper[4955]: E1128 07:15:00.143771 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="extract-utilities" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.143777 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="extract-utilities" Nov 28 07:15:00 crc kubenswrapper[4955]: E1128 07:15:00.143787 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerName="extract-utilities" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.143792 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerName="extract-utilities" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.143998 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b8d28e-3fdf-4a23-b53b-0be6029f3358" containerName="registry-server" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.144024 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="22637d60-a423-47fe-910a-7b2b98ae6eda" containerName="registry-server" Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.145100 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.148133 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.148236 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.155741 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"]
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.223780 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-secret-volume\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.223906 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b59r\" (UniqueName: \"kubernetes.io/projected/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-kube-api-access-9b59r\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.223988 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-config-volume\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.325820 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-secret-volume\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.325898 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b59r\" (UniqueName: \"kubernetes.io/projected/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-kube-api-access-9b59r\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.325971 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-config-volume\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.326947 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-config-volume\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.340181 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-secret-volume\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.348186 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b59r\" (UniqueName: \"kubernetes.io/projected/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-kube-api-access-9b59r\") pod \"collect-profiles-29405235-2vdxh\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.469342 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:00 crc kubenswrapper[4955]: I1128 07:15:00.752918 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"]
Nov 28 07:15:01 crc kubenswrapper[4955]: I1128 07:15:01.423336 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh" event={"ID":"5326bf32-ec7c-45f6-a446-7535bbe8ab1c","Type":"ContainerStarted","Data":"208afecfd5ab6a434659db0bb88976d39eddd6318dd637147b335740de48a6a6"}
Nov 28 07:15:01 crc kubenswrapper[4955]: I1128 07:15:01.423631 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh" event={"ID":"5326bf32-ec7c-45f6-a446-7535bbe8ab1c","Type":"ContainerStarted","Data":"136426a7911b750305014ed682a8cb3e47bf89e06c9b5262078fec7610d84068"}
Nov 28 07:15:01 crc kubenswrapper[4955]: I1128 07:15:01.445448 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh" podStartSLOduration=1.445428384 podStartE2EDuration="1.445428384s" podCreationTimestamp="2025-11-28 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:15:01.438623579 +0000 UTC m=+3224.027879159" watchObservedRunningTime="2025-11-28 07:15:01.445428384 +0000 UTC m=+3224.034683944"
Nov 28 07:15:02 crc kubenswrapper[4955]: I1128 07:15:02.437247 4955 generic.go:334] "Generic (PLEG): container finished" podID="5326bf32-ec7c-45f6-a446-7535bbe8ab1c" containerID="208afecfd5ab6a434659db0bb88976d39eddd6318dd637147b335740de48a6a6" exitCode=0
Nov 28 07:15:02 crc kubenswrapper[4955]: I1128 07:15:02.437353 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh" event={"ID":"5326bf32-ec7c-45f6-a446-7535bbe8ab1c","Type":"ContainerDied","Data":"208afecfd5ab6a434659db0bb88976d39eddd6318dd637147b335740de48a6a6"}
Nov 28 07:15:03 crc kubenswrapper[4955]: I1128 07:15:03.919440 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.051528 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-config-volume\") pod \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") "
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.051788 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-secret-volume\") pod \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") "
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.051840 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b59r\" (UniqueName: \"kubernetes.io/projected/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-kube-api-access-9b59r\") pod \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\" (UID: \"5326bf32-ec7c-45f6-a446-7535bbe8ab1c\") "
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.052660 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "5326bf32-ec7c-45f6-a446-7535bbe8ab1c" (UID: "5326bf32-ec7c-45f6-a446-7535bbe8ab1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.057663 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-kube-api-access-9b59r" (OuterVolumeSpecName: "kube-api-access-9b59r") pod "5326bf32-ec7c-45f6-a446-7535bbe8ab1c" (UID: "5326bf32-ec7c-45f6-a446-7535bbe8ab1c"). InnerVolumeSpecName "kube-api-access-9b59r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.064468 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5326bf32-ec7c-45f6-a446-7535bbe8ab1c" (UID: "5326bf32-ec7c-45f6-a446-7535bbe8ab1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.155796 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.155883 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b59r\" (UniqueName: \"kubernetes.io/projected/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-kube-api-access-9b59r\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.155938 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5326bf32-ec7c-45f6-a446-7535bbe8ab1c-config-volume\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.488131 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh" event={"ID":"5326bf32-ec7c-45f6-a446-7535bbe8ab1c","Type":"ContainerDied","Data":"136426a7911b750305014ed682a8cb3e47bf89e06c9b5262078fec7610d84068"}
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.488659 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="136426a7911b750305014ed682a8cb3e47bf89e06c9b5262078fec7610d84068"
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.488245 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405235-2vdxh"
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.553922 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt"]
Nov 28 07:15:04 crc kubenswrapper[4955]: I1128 07:15:04.561564 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405190-fvtvt"]
Nov 28 07:15:05 crc kubenswrapper[4955]: I1128 07:15:05.716703 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4fa1542-f019-497c-bda9-8e389b823683" path="/var/lib/kubelet/pods/a4fa1542-f019-497c-bda9-8e389b823683/volumes"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.452983 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-49nzt"]
Nov 28 07:15:21 crc kubenswrapper[4955]: E1128 07:15:21.454079 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5326bf32-ec7c-45f6-a446-7535bbe8ab1c" containerName="collect-profiles"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.454091 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5326bf32-ec7c-45f6-a446-7535bbe8ab1c" containerName="collect-profiles"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.454311 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5326bf32-ec7c-45f6-a446-7535bbe8ab1c" containerName="collect-profiles"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.455619 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.494177 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49nzt"]
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.503648 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-utilities\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.503775 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnp7\" (UniqueName: \"kubernetes.io/projected/3005c275-414b-46b1-b8c4-613738e91224-kube-api-access-9nnp7\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.503842 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-catalog-content\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.606220 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-utilities\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.606299 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nnp7\" (UniqueName: \"kubernetes.io/projected/3005c275-414b-46b1-b8c4-613738e91224-kube-api-access-9nnp7\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.606335 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-catalog-content\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.606808 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-utilities\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.607127 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-catalog-content\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.626785 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nnp7\" (UniqueName: \"kubernetes.io/projected/3005c275-414b-46b1-b8c4-613738e91224-kube-api-access-9nnp7\") pod \"community-operators-49nzt\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") " pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:21 crc kubenswrapper[4955]: I1128 07:15:21.777681 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:22 crc kubenswrapper[4955]: I1128 07:15:22.359539 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49nzt"]
Nov 28 07:15:22 crc kubenswrapper[4955]: I1128 07:15:22.699428 4955 generic.go:334] "Generic (PLEG): container finished" podID="3005c275-414b-46b1-b8c4-613738e91224" containerID="cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646" exitCode=0
Nov 28 07:15:22 crc kubenswrapper[4955]: I1128 07:15:22.699642 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49nzt" event={"ID":"3005c275-414b-46b1-b8c4-613738e91224","Type":"ContainerDied","Data":"cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646"}
Nov 28 07:15:22 crc kubenswrapper[4955]: I1128 07:15:22.701125 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49nzt" event={"ID":"3005c275-414b-46b1-b8c4-613738e91224","Type":"ContainerStarted","Data":"510cc3ba9c366be8f56d423e2cfafb57f377b7e0b0b1c706059b8721eafeea94"}
Nov 28 07:15:22 crc kubenswrapper[4955]: I1128 07:15:22.701615 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 07:15:24 crc kubenswrapper[4955]: I1128 07:15:24.722306 4955 generic.go:334] "Generic (PLEG): container finished" podID="3005c275-414b-46b1-b8c4-613738e91224" containerID="20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd" exitCode=0
Nov 28 07:15:24 crc kubenswrapper[4955]: I1128 07:15:24.722422 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49nzt" event={"ID":"3005c275-414b-46b1-b8c4-613738e91224","Type":"ContainerDied","Data":"20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd"}
Nov 28 07:15:25 crc kubenswrapper[4955]: I1128 07:15:25.739232 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49nzt" event={"ID":"3005c275-414b-46b1-b8c4-613738e91224","Type":"ContainerStarted","Data":"825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4"}
Nov 28 07:15:25 crc kubenswrapper[4955]: I1128 07:15:25.769156 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-49nzt" podStartSLOduration=2.224045471 podStartE2EDuration="4.769134808s" podCreationTimestamp="2025-11-28 07:15:21 +0000 UTC" firstStartedPulling="2025-11-28 07:15:22.701351017 +0000 UTC m=+3245.290606587" lastFinishedPulling="2025-11-28 07:15:25.246440354 +0000 UTC m=+3247.835695924" observedRunningTime="2025-11-28 07:15:25.75943848 +0000 UTC m=+3248.348694080" watchObservedRunningTime="2025-11-28 07:15:25.769134808 +0000 UTC m=+3248.358390388"
Nov 28 07:15:28 crc kubenswrapper[4955]: I1128 07:15:28.246814 4955 scope.go:117] "RemoveContainer" containerID="11b3c98b0d8d9c62ac27721527ee585a93cc193ddc08c69476a20d0a3dbcb4c2"
Nov 28 07:15:31 crc kubenswrapper[4955]: I1128 07:15:31.778298 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:31 crc kubenswrapper[4955]: I1128 07:15:31.778764 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:31 crc kubenswrapper[4955]: I1128 07:15:31.855305 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:31 crc kubenswrapper[4955]: I1128 07:15:31.927759 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:32 crc kubenswrapper[4955]: I1128 07:15:32.095918 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49nzt"]
Nov 28 07:15:33 crc kubenswrapper[4955]: I1128 07:15:33.813899 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-49nzt" podUID="3005c275-414b-46b1-b8c4-613738e91224" containerName="registry-server" containerID="cri-o://825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4" gracePeriod=2
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.314713 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.403408 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-utilities\") pod \"3005c275-414b-46b1-b8c4-613738e91224\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") "
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.403452 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-catalog-content\") pod \"3005c275-414b-46b1-b8c4-613738e91224\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") "
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.403684 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nnp7\" (UniqueName: \"kubernetes.io/projected/3005c275-414b-46b1-b8c4-613738e91224-kube-api-access-9nnp7\") pod \"3005c275-414b-46b1-b8c4-613738e91224\" (UID: \"3005c275-414b-46b1-b8c4-613738e91224\") "
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.404269 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-utilities" (OuterVolumeSpecName: "utilities") pod "3005c275-414b-46b1-b8c4-613738e91224" (UID: "3005c275-414b-46b1-b8c4-613738e91224"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.410562 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3005c275-414b-46b1-b8c4-613738e91224-kube-api-access-9nnp7" (OuterVolumeSpecName: "kube-api-access-9nnp7") pod "3005c275-414b-46b1-b8c4-613738e91224" (UID: "3005c275-414b-46b1-b8c4-613738e91224"). InnerVolumeSpecName "kube-api-access-9nnp7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.506356 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nnp7\" (UniqueName: \"kubernetes.io/projected/3005c275-414b-46b1-b8c4-613738e91224-kube-api-access-9nnp7\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.506394 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.772104 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3005c275-414b-46b1-b8c4-613738e91224" (UID: "3005c275-414b-46b1-b8c4-613738e91224"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.811014 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3005c275-414b-46b1-b8c4-613738e91224-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.823080 4955 generic.go:334] "Generic (PLEG): container finished" podID="3005c275-414b-46b1-b8c4-613738e91224" containerID="825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4" exitCode=0
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.823130 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49nzt" event={"ID":"3005c275-414b-46b1-b8c4-613738e91224","Type":"ContainerDied","Data":"825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4"}
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.823161 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49nzt"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.823188 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49nzt" event={"ID":"3005c275-414b-46b1-b8c4-613738e91224","Type":"ContainerDied","Data":"510cc3ba9c366be8f56d423e2cfafb57f377b7e0b0b1c706059b8721eafeea94"}
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.823218 4955 scope.go:117] "RemoveContainer" containerID="825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.852146 4955 scope.go:117] "RemoveContainer" containerID="20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.858310 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49nzt"]
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.869893 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-49nzt"]
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.904945 4955 scope.go:117] "RemoveContainer" containerID="cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.935621 4955 scope.go:117] "RemoveContainer" containerID="825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4"
Nov 28 07:15:34 crc kubenswrapper[4955]: E1128 07:15:34.936094 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4\": container with ID starting with 825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4 not found: ID does not exist" containerID="825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.936123 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4"} err="failed to get container status \"825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4\": rpc error: code = NotFound desc = could not find container \"825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4\": container with ID starting with 825a249ab61e9c8bf93b09c15de50fbfc47ec696bfe152a69e4c03c690c2dde4 not found: ID does not exist"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.936146 4955 scope.go:117] "RemoveContainer" containerID="20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd"
Nov 28 07:15:34 crc kubenswrapper[4955]: E1128 07:15:34.936473 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd\": container with ID starting with 20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd not found: ID does not exist" containerID="20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.936546 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd"} err="failed to get container status \"20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd\": rpc error: code = NotFound desc = could not find container \"20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd\": container with ID starting with 20edb3316b125d6834659df8b18643931be30171ccd2e90dac4ec7fd7f3bbdcd not found: ID does not exist"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.936560 4955 scope.go:117] "RemoveContainer" containerID="cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646"
Nov 28 07:15:34 crc kubenswrapper[4955]: E1128 07:15:34.936825 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646\": container with ID starting with cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646 not found: ID does not exist" containerID="cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646"
Nov 28 07:15:34 crc kubenswrapper[4955]: I1128 07:15:34.936842 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646"} err="failed to get container status \"cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646\": rpc error: code = NotFound desc = could not find container \"cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646\": container with ID starting with cd23fb61fb89ea5e26bd9a746afd28f67b90faebd02bc633f7d3b05d7e470646 not found: ID does not exist"
Nov 28 07:15:35 crc kubenswrapper[4955]: I1128 07:15:35.718277 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3005c275-414b-46b1-b8c4-613738e91224" path="/var/lib/kubelet/pods/3005c275-414b-46b1-b8c4-613738e91224/volumes"
Nov 28 07:16:53 crc kubenswrapper[4955]: I1128 07:16:53.393313 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:16:53 crc kubenswrapper[4955]: I1128 07:16:53.393955 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:17:23 crc kubenswrapper[4955]: I1128 07:17:23.393305 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:17:23 crc kubenswrapper[4955]: I1128 07:17:23.394249 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:17:53 crc kubenswrapper[4955]: I1128 07:17:53.393185 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:17:53 crc kubenswrapper[4955]: I1128 07:17:53.393725 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:17:53 crc kubenswrapper[4955]: I1128 07:17:53.393775 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht"
Nov 28 07:17:53 crc kubenswrapper[4955]: I1128 07:17:53.394536 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13abcec15b6c79655d494ebee90b7d6c3e3f67982eede961271cbde86fce42b5"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 07:17:53 crc kubenswrapper[4955]: I1128 07:17:53.394599 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://13abcec15b6c79655d494ebee90b7d6c3e3f67982eede961271cbde86fce42b5" gracePeriod=600
Nov 28 07:17:54 crc kubenswrapper[4955]: I1128 07:17:54.271579 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="13abcec15b6c79655d494ebee90b7d6c3e3f67982eede961271cbde86fce42b5" exitCode=0
Nov 28 07:17:54 crc kubenswrapper[4955]: I1128 07:17:54.271668 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"13abcec15b6c79655d494ebee90b7d6c3e3f67982eede961271cbde86fce42b5"}
Nov 28 07:17:54 crc kubenswrapper[4955]: I1128 07:17:54.272577 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42"}
Nov 28 07:17:54 crc kubenswrapper[4955]: I1128 07:17:54.272643 4955 scope.go:117] "RemoveContainer" containerID="b03b9b8dcf7dba706faa89d659202f2fb7719f7dbb86ccbb5606a3d99870702f"
Nov 28 07:18:11 crc kubenswrapper[4955]: I1128 07:18:11.464710 4955 generic.go:334] "Generic (PLEG): container finished" podID="81ccd45f-3f32-4e86-8874-0468a6fc2471" containerID="62d2b76a5bd85fe105df65b2b77cf9506efe62ea05d86252c3dca1bcfd73aff9" exitCode=0
Nov 28 07:18:11 crc kubenswrapper[4955]: I1128 07:18:11.464794 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"81ccd45f-3f32-4e86-8874-0468a6fc2471","Type":"ContainerDied","Data":"62d2b76a5bd85fe105df65b2b77cf9506efe62ea05d86252c3dca1bcfd73aff9"}
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.206263 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.386870 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.387331 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52vjp\" (UniqueName: \"kubernetes.io/projected/81ccd45f-3f32-4e86-8874-0468a6fc2471-kube-api-access-52vjp\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.387387 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ssh-key\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.387461 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-config-data\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.387735 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ca-certs\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.387822 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.387927 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-workdir\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.387981 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config-secret\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.388043 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-temporary\") pod \"81ccd45f-3f32-4e86-8874-0468a6fc2471\" (UID: \"81ccd45f-3f32-4e86-8874-0468a6fc2471\") "
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.388156 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-config-data" (OuterVolumeSpecName: "config-data") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.389091 4955 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-config-data\") on node \"crc\" DevicePath \"\""
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.389698 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.398042 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.401698 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ccd45f-3f32-4e86-8874-0468a6fc2471-kube-api-access-52vjp" (OuterVolumeSpecName: "kube-api-access-52vjp") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "kube-api-access-52vjp".
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.401948 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.442699 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.450720 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.460977 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.462931 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "81ccd45f-3f32-4e86-8874-0468a6fc2471" (UID: "81ccd45f-3f32-4e86-8874-0468a6fc2471"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.490997 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.491035 4955 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.491046 4955 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.491056 4955 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/81ccd45f-3f32-4e86-8874-0468a6fc2471-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.491089 4955 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.491099 4955 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-52vjp\" (UniqueName: \"kubernetes.io/projected/81ccd45f-3f32-4e86-8874-0468a6fc2471-kube-api-access-52vjp\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.491109 4955 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.491117 4955 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/81ccd45f-3f32-4e86-8874-0468a6fc2471-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.493249 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"81ccd45f-3f32-4e86-8874-0468a6fc2471","Type":"ContainerDied","Data":"4fc74c2740e1d7d7eaa03a9b751cec5a50c86036f9193da8c33657623c49ce01"} Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.493294 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.493297 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fc74c2740e1d7d7eaa03a9b751cec5a50c86036f9193da8c33657623c49ce01" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.527953 4955 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 28 07:18:13 crc kubenswrapper[4955]: I1128 07:18:13.592726 4955 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.538888 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 07:18:17 crc kubenswrapper[4955]: E1128 07:18:17.539842 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3005c275-414b-46b1-b8c4-613738e91224" containerName="registry-server" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.539858 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3005c275-414b-46b1-b8c4-613738e91224" containerName="registry-server" Nov 28 07:18:17 crc kubenswrapper[4955]: E1128 07:18:17.539883 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ccd45f-3f32-4e86-8874-0468a6fc2471" containerName="tempest-tests-tempest-tests-runner" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.539892 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ccd45f-3f32-4e86-8874-0468a6fc2471" containerName="tempest-tests-tempest-tests-runner" Nov 28 07:18:17 crc kubenswrapper[4955]: E1128 07:18:17.539922 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3005c275-414b-46b1-b8c4-613738e91224" containerName="extract-utilities" Nov 28 07:18:17 crc 
kubenswrapper[4955]: I1128 07:18:17.539930 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3005c275-414b-46b1-b8c4-613738e91224" containerName="extract-utilities" Nov 28 07:18:17 crc kubenswrapper[4955]: E1128 07:18:17.539950 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3005c275-414b-46b1-b8c4-613738e91224" containerName="extract-content" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.539958 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="3005c275-414b-46b1-b8c4-613738e91224" containerName="extract-content" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.540162 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ccd45f-3f32-4e86-8874-0468a6fc2471" containerName="tempest-tests-tempest-tests-runner" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.540197 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="3005c275-414b-46b1-b8c4-613738e91224" containerName="registry-server" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.540915 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.543057 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-47mgs" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.550894 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.684124 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvlmn\" (UniqueName: \"kubernetes.io/projected/11625f83-961b-4c79-aa1a-d8d9fe1c6bf1-kube-api-access-zvlmn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.684558 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.786953 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.787349 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvlmn\" (UniqueName: 
\"kubernetes.io/projected/11625f83-961b-4c79-aa1a-d8d9fe1c6bf1-kube-api-access-zvlmn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.787423 4955 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.819367 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvlmn\" (UniqueName: \"kubernetes.io/projected/11625f83-961b-4c79-aa1a-d8d9fe1c6bf1-kube-api-access-zvlmn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.835892 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.872137 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-47mgs" Nov 28 07:18:17 crc kubenswrapper[4955]: I1128 07:18:17.881112 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 07:18:18 crc kubenswrapper[4955]: I1128 07:18:18.388568 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 07:18:18 crc kubenswrapper[4955]: I1128 07:18:18.587039 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1","Type":"ContainerStarted","Data":"a72f7b3115619d158e8af91ef945097bb70c8d778bdaf7e5c4ea2d22ef8c0d07"} Nov 28 07:18:20 crc kubenswrapper[4955]: I1128 07:18:20.623285 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"11625f83-961b-4c79-aa1a-d8d9fe1c6bf1","Type":"ContainerStarted","Data":"b7bd898895040f1fce4570230bb2bd18710211f9de0ad2c127c57262082e3331"} Nov 28 07:18:20 crc kubenswrapper[4955]: I1128 07:18:20.648492 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.638493079 podStartE2EDuration="3.648470552s" podCreationTimestamp="2025-11-28 07:18:17 +0000 UTC" firstStartedPulling="2025-11-28 07:18:18.389047186 +0000 UTC m=+3420.978302766" lastFinishedPulling="2025-11-28 07:18:19.399024669 +0000 UTC m=+3421.988280239" observedRunningTime="2025-11-28 07:18:20.640583226 +0000 UTC m=+3423.229838816" watchObservedRunningTime="2025-11-28 07:18:20.648470552 +0000 UTC m=+3423.237726132" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.035151 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd467/must-gather-94qtx"] Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.037731 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.044693 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bd467"/"openshift-service-ca.crt" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.058104 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bd467"/"kube-root-ca.crt" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.096029 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bd467/must-gather-94qtx"] Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.176128 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h57th\" (UniqueName: \"kubernetes.io/projected/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-kube-api-access-h57th\") pod \"must-gather-94qtx\" (UID: \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\") " pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.176449 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-must-gather-output\") pod \"must-gather-94qtx\" (UID: \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\") " pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.278539 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-must-gather-output\") pod \"must-gather-94qtx\" (UID: \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\") " pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.278661 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h57th\" (UniqueName: \"kubernetes.io/projected/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-kube-api-access-h57th\") pod \"must-gather-94qtx\" (UID: \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\") " pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.279049 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-must-gather-output\") pod \"must-gather-94qtx\" (UID: \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\") " pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.297362 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h57th\" (UniqueName: \"kubernetes.io/projected/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-kube-api-access-h57th\") pod \"must-gather-94qtx\" (UID: \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\") " pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.365176 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.822440 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bd467/must-gather-94qtx"] Nov 28 07:18:42 crc kubenswrapper[4955]: W1128 07:18:42.827938 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d2577be_66cf_4b47_83ae_f9ab7b99ffd9.slice/crio-aef40f45408fa9c492ca633d6e70f39fd90901f5ca4f3b312e30908a2efb665c WatchSource:0}: Error finding container aef40f45408fa9c492ca633d6e70f39fd90901f5ca4f3b312e30908a2efb665c: Status 404 returned error can't find the container with id aef40f45408fa9c492ca633d6e70f39fd90901f5ca4f3b312e30908a2efb665c Nov 28 07:18:42 crc kubenswrapper[4955]: I1128 07:18:42.843924 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/must-gather-94qtx" event={"ID":"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9","Type":"ContainerStarted","Data":"aef40f45408fa9c492ca633d6e70f39fd90901f5ca4f3b312e30908a2efb665c"} Nov 28 07:18:49 crc kubenswrapper[4955]: I1128 07:18:49.926996 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/must-gather-94qtx" event={"ID":"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9","Type":"ContainerStarted","Data":"699973d3ccbb139edb26e2d0a842e77f68c1ad289e50784138c9165c1af88eed"} Nov 28 07:18:50 crc kubenswrapper[4955]: I1128 07:18:50.940255 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/must-gather-94qtx" event={"ID":"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9","Type":"ContainerStarted","Data":"54f6c29405df8f641d53166d0252d2ef4aa55094680c5681911e26c7f6592140"} Nov 28 07:18:50 crc kubenswrapper[4955]: I1128 07:18:50.970703 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd467/must-gather-94qtx" podStartSLOduration=2.343860642 
podStartE2EDuration="8.970684497s" podCreationTimestamp="2025-11-28 07:18:42 +0000 UTC" firstStartedPulling="2025-11-28 07:18:42.831464956 +0000 UTC m=+3445.420720526" lastFinishedPulling="2025-11-28 07:18:49.458288811 +0000 UTC m=+3452.047544381" observedRunningTime="2025-11-28 07:18:50.962674678 +0000 UTC m=+3453.551930268" watchObservedRunningTime="2025-11-28 07:18:50.970684497 +0000 UTC m=+3453.559940077" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.184740 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd467/crc-debug-j7m7q"] Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.186845 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.190454 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bd467"/"default-dockercfg-rnrr9" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.297828 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-host\") pod \"crc-debug-j7m7q\" (UID: \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\") " pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.297954 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278mt\" (UniqueName: \"kubernetes.io/projected/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-kube-api-access-278mt\") pod \"crc-debug-j7m7q\" (UID: \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\") " pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.399021 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278mt\" (UniqueName: 
\"kubernetes.io/projected/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-kube-api-access-278mt\") pod \"crc-debug-j7m7q\" (UID: \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\") " pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.399199 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-host\") pod \"crc-debug-j7m7q\" (UID: \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\") " pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.399330 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-host\") pod \"crc-debug-j7m7q\" (UID: \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\") " pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.424722 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278mt\" (UniqueName: \"kubernetes.io/projected/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-kube-api-access-278mt\") pod \"crc-debug-j7m7q\" (UID: \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\") " pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.507193 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:18:53 crc kubenswrapper[4955]: W1128 07:18:53.538599 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27eec3d4_484c_449d_8b47_b1c23bc3a5bd.slice/crio-1d3152ff19c4ff714dbd870f5c7cb6322010612cbeb1699812c3145348170eed WatchSource:0}: Error finding container 1d3152ff19c4ff714dbd870f5c7cb6322010612cbeb1699812c3145348170eed: Status 404 returned error can't find the container with id 1d3152ff19c4ff714dbd870f5c7cb6322010612cbeb1699812c3145348170eed Nov 28 07:18:53 crc kubenswrapper[4955]: I1128 07:18:53.965407 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/crc-debug-j7m7q" event={"ID":"27eec3d4-484c-449d-8b47-b1c23bc3a5bd","Type":"ContainerStarted","Data":"1d3152ff19c4ff714dbd870f5c7cb6322010612cbeb1699812c3145348170eed"} Nov 28 07:19:06 crc kubenswrapper[4955]: I1128 07:19:06.077990 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/crc-debug-j7m7q" event={"ID":"27eec3d4-484c-449d-8b47-b1c23bc3a5bd","Type":"ContainerStarted","Data":"2c894d96311810a718b5113d743a3b53ad8701ef2f33831994d10a5336159b2a"} Nov 28 07:19:06 crc kubenswrapper[4955]: I1128 07:19:06.104526 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd467/crc-debug-j7m7q" podStartSLOduration=1.400217376 podStartE2EDuration="13.104330119s" podCreationTimestamp="2025-11-28 07:18:53 +0000 UTC" firstStartedPulling="2025-11-28 07:18:53.540954815 +0000 UTC m=+3456.130210385" lastFinishedPulling="2025-11-28 07:19:05.245067558 +0000 UTC m=+3467.834323128" observedRunningTime="2025-11-28 07:19:06.088516947 +0000 UTC m=+3468.677772527" watchObservedRunningTime="2025-11-28 07:19:06.104330119 +0000 UTC m=+3468.693585729" Nov 28 07:19:43 crc kubenswrapper[4955]: I1128 07:19:43.396208 4955 generic.go:334] "Generic (PLEG): container 
finished" podID="27eec3d4-484c-449d-8b47-b1c23bc3a5bd" containerID="2c894d96311810a718b5113d743a3b53ad8701ef2f33831994d10a5336159b2a" exitCode=0 Nov 28 07:19:43 crc kubenswrapper[4955]: I1128 07:19:43.396238 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/crc-debug-j7m7q" event={"ID":"27eec3d4-484c-449d-8b47-b1c23bc3a5bd","Type":"ContainerDied","Data":"2c894d96311810a718b5113d743a3b53ad8701ef2f33831994d10a5336159b2a"} Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.533319 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.588254 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd467/crc-debug-j7m7q"] Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.601926 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd467/crc-debug-j7m7q"] Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.720551 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-host\") pod \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\" (UID: \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\") " Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.720964 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-278mt\" (UniqueName: \"kubernetes.io/projected/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-kube-api-access-278mt\") pod \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\" (UID: \"27eec3d4-484c-449d-8b47-b1c23bc3a5bd\") " Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.720660 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-host" (OuterVolumeSpecName: "host") pod "27eec3d4-484c-449d-8b47-b1c23bc3a5bd" (UID: 
"27eec3d4-484c-449d-8b47-b1c23bc3a5bd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.721829 4955 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-host\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.734353 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-kube-api-access-278mt" (OuterVolumeSpecName: "kube-api-access-278mt") pod "27eec3d4-484c-449d-8b47-b1c23bc3a5bd" (UID: "27eec3d4-484c-449d-8b47-b1c23bc3a5bd"). InnerVolumeSpecName "kube-api-access-278mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:44 crc kubenswrapper[4955]: I1128 07:19:44.823877 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-278mt\" (UniqueName: \"kubernetes.io/projected/27eec3d4-484c-449d-8b47-b1c23bc3a5bd-kube-api-access-278mt\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.423657 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3152ff19c4ff714dbd870f5c7cb6322010612cbeb1699812c3145348170eed" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.423787 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-j7m7q" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.723043 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27eec3d4-484c-449d-8b47-b1c23bc3a5bd" path="/var/lib/kubelet/pods/27eec3d4-484c-449d-8b47-b1c23bc3a5bd/volumes" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.768289 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd467/crc-debug-q77tc"] Nov 28 07:19:45 crc kubenswrapper[4955]: E1128 07:19:45.768992 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27eec3d4-484c-449d-8b47-b1c23bc3a5bd" containerName="container-00" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.769033 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="27eec3d4-484c-449d-8b47-b1c23bc3a5bd" containerName="container-00" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.769423 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="27eec3d4-484c-449d-8b47-b1c23bc3a5bd" containerName="container-00" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.770470 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.772962 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bd467"/"default-dockercfg-rnrr9" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.946814 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7afc8424-bae6-4ade-804d-64d33d7437b5-host\") pod \"crc-debug-q77tc\" (UID: \"7afc8424-bae6-4ade-804d-64d33d7437b5\") " pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:45 crc kubenswrapper[4955]: I1128 07:19:45.947228 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9spvh\" (UniqueName: \"kubernetes.io/projected/7afc8424-bae6-4ade-804d-64d33d7437b5-kube-api-access-9spvh\") pod \"crc-debug-q77tc\" (UID: \"7afc8424-bae6-4ade-804d-64d33d7437b5\") " pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:46 crc kubenswrapper[4955]: I1128 07:19:46.048949 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7afc8424-bae6-4ade-804d-64d33d7437b5-host\") pod \"crc-debug-q77tc\" (UID: \"7afc8424-bae6-4ade-804d-64d33d7437b5\") " pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:46 crc kubenswrapper[4955]: I1128 07:19:46.049025 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9spvh\" (UniqueName: \"kubernetes.io/projected/7afc8424-bae6-4ade-804d-64d33d7437b5-kube-api-access-9spvh\") pod \"crc-debug-q77tc\" (UID: \"7afc8424-bae6-4ade-804d-64d33d7437b5\") " pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:46 crc kubenswrapper[4955]: I1128 07:19:46.049230 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/7afc8424-bae6-4ade-804d-64d33d7437b5-host\") pod \"crc-debug-q77tc\" (UID: \"7afc8424-bae6-4ade-804d-64d33d7437b5\") " pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:46 crc kubenswrapper[4955]: I1128 07:19:46.068659 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9spvh\" (UniqueName: \"kubernetes.io/projected/7afc8424-bae6-4ade-804d-64d33d7437b5-kube-api-access-9spvh\") pod \"crc-debug-q77tc\" (UID: \"7afc8424-bae6-4ade-804d-64d33d7437b5\") " pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:46 crc kubenswrapper[4955]: I1128 07:19:46.105363 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:46 crc kubenswrapper[4955]: I1128 07:19:46.434997 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/crc-debug-q77tc" event={"ID":"7afc8424-bae6-4ade-804d-64d33d7437b5","Type":"ContainerStarted","Data":"6e437d48c4adc71926173f1fb7c760121a0f14e9041eb6caff88d1f50a7d9015"} Nov 28 07:19:46 crc kubenswrapper[4955]: I1128 07:19:46.435431 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/crc-debug-q77tc" event={"ID":"7afc8424-bae6-4ade-804d-64d33d7437b5","Type":"ContainerStarted","Data":"612709989f0a260622f7ee849e512b697d0b089bed3b7f16bffe86bde22aafb9"} Nov 28 07:19:46 crc kubenswrapper[4955]: I1128 07:19:46.453014 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bd467/crc-debug-q77tc" podStartSLOduration=1.452996782 podStartE2EDuration="1.452996782s" podCreationTimestamp="2025-11-28 07:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:19:46.450742028 +0000 UTC m=+3509.040009648" watchObservedRunningTime="2025-11-28 07:19:46.452996782 +0000 UTC m=+3509.042252352" Nov 28 
07:19:47 crc kubenswrapper[4955]: I1128 07:19:47.447017 4955 generic.go:334] "Generic (PLEG): container finished" podID="7afc8424-bae6-4ade-804d-64d33d7437b5" containerID="6e437d48c4adc71926173f1fb7c760121a0f14e9041eb6caff88d1f50a7d9015" exitCode=0 Nov 28 07:19:47 crc kubenswrapper[4955]: I1128 07:19:47.447140 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/crc-debug-q77tc" event={"ID":"7afc8424-bae6-4ade-804d-64d33d7437b5","Type":"ContainerDied","Data":"6e437d48c4adc71926173f1fb7c760121a0f14e9041eb6caff88d1f50a7d9015"} Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.550926 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.615862 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd467/crc-debug-q77tc"] Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.628965 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd467/crc-debug-q77tc"] Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.703887 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7afc8424-bae6-4ade-804d-64d33d7437b5-host\") pod \"7afc8424-bae6-4ade-804d-64d33d7437b5\" (UID: \"7afc8424-bae6-4ade-804d-64d33d7437b5\") " Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.703988 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7afc8424-bae6-4ade-804d-64d33d7437b5-host" (OuterVolumeSpecName: "host") pod "7afc8424-bae6-4ade-804d-64d33d7437b5" (UID: "7afc8424-bae6-4ade-804d-64d33d7437b5"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.704332 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9spvh\" (UniqueName: \"kubernetes.io/projected/7afc8424-bae6-4ade-804d-64d33d7437b5-kube-api-access-9spvh\") pod \"7afc8424-bae6-4ade-804d-64d33d7437b5\" (UID: \"7afc8424-bae6-4ade-804d-64d33d7437b5\") " Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.704773 4955 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7afc8424-bae6-4ade-804d-64d33d7437b5-host\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.709645 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afc8424-bae6-4ade-804d-64d33d7437b5-kube-api-access-9spvh" (OuterVolumeSpecName: "kube-api-access-9spvh") pod "7afc8424-bae6-4ade-804d-64d33d7437b5" (UID: "7afc8424-bae6-4ade-804d-64d33d7437b5"). InnerVolumeSpecName "kube-api-access-9spvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:48 crc kubenswrapper[4955]: I1128 07:19:48.807603 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9spvh\" (UniqueName: \"kubernetes.io/projected/7afc8424-bae6-4ade-804d-64d33d7437b5-kube-api-access-9spvh\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.476841 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="612709989f0a260622f7ee849e512b697d0b089bed3b7f16bffe86bde22aafb9" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.476895 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-q77tc" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.718665 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afc8424-bae6-4ade-804d-64d33d7437b5" path="/var/lib/kubelet/pods/7afc8424-bae6-4ade-804d-64d33d7437b5/volumes" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.801159 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bd467/crc-debug-hkw84"] Nov 28 07:19:49 crc kubenswrapper[4955]: E1128 07:19:49.801596 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7afc8424-bae6-4ade-804d-64d33d7437b5" containerName="container-00" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.801619 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="7afc8424-bae6-4ade-804d-64d33d7437b5" containerName="container-00" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.801840 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="7afc8424-bae6-4ade-804d-64d33d7437b5" containerName="container-00" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.802432 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.804204 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bd467"/"default-dockercfg-rnrr9" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.929841 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/014f5f24-ac65-41d1-a36a-b9400ef37215-host\") pod \"crc-debug-hkw84\" (UID: \"014f5f24-ac65-41d1-a36a-b9400ef37215\") " pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:49 crc kubenswrapper[4955]: I1128 07:19:49.930392 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krhj2\" (UniqueName: \"kubernetes.io/projected/014f5f24-ac65-41d1-a36a-b9400ef37215-kube-api-access-krhj2\") pod \"crc-debug-hkw84\" (UID: \"014f5f24-ac65-41d1-a36a-b9400ef37215\") " pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.032648 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/014f5f24-ac65-41d1-a36a-b9400ef37215-host\") pod \"crc-debug-hkw84\" (UID: \"014f5f24-ac65-41d1-a36a-b9400ef37215\") " pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.032846 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/014f5f24-ac65-41d1-a36a-b9400ef37215-host\") pod \"crc-debug-hkw84\" (UID: \"014f5f24-ac65-41d1-a36a-b9400ef37215\") " pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.033408 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krhj2\" (UniqueName: 
\"kubernetes.io/projected/014f5f24-ac65-41d1-a36a-b9400ef37215-kube-api-access-krhj2\") pod \"crc-debug-hkw84\" (UID: \"014f5f24-ac65-41d1-a36a-b9400ef37215\") " pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.057088 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krhj2\" (UniqueName: \"kubernetes.io/projected/014f5f24-ac65-41d1-a36a-b9400ef37215-kube-api-access-krhj2\") pod \"crc-debug-hkw84\" (UID: \"014f5f24-ac65-41d1-a36a-b9400ef37215\") " pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.130339 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:50 crc kubenswrapper[4955]: W1128 07:19:50.158540 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod014f5f24_ac65_41d1_a36a_b9400ef37215.slice/crio-68e44ce4829c91ba92d7b9dcf59e3f414b3798d813ecb7b5b1bda65976b9937a WatchSource:0}: Error finding container 68e44ce4829c91ba92d7b9dcf59e3f414b3798d813ecb7b5b1bda65976b9937a: Status 404 returned error can't find the container with id 68e44ce4829c91ba92d7b9dcf59e3f414b3798d813ecb7b5b1bda65976b9937a Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.487468 4955 generic.go:334] "Generic (PLEG): container finished" podID="014f5f24-ac65-41d1-a36a-b9400ef37215" containerID="6fb83488f25cd388fed43bd3d5963adf6d208b1d466f32fdb6a8790674d281d9" exitCode=0 Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.487561 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/crc-debug-hkw84" event={"ID":"014f5f24-ac65-41d1-a36a-b9400ef37215","Type":"ContainerDied","Data":"6fb83488f25cd388fed43bd3d5963adf6d208b1d466f32fdb6a8790674d281d9"} Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.487901 4955 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-bd467/crc-debug-hkw84" event={"ID":"014f5f24-ac65-41d1-a36a-b9400ef37215","Type":"ContainerStarted","Data":"68e44ce4829c91ba92d7b9dcf59e3f414b3798d813ecb7b5b1bda65976b9937a"} Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.533474 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd467/crc-debug-hkw84"] Nov 28 07:19:50 crc kubenswrapper[4955]: I1128 07:19:50.543785 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd467/crc-debug-hkw84"] Nov 28 07:19:51 crc kubenswrapper[4955]: I1128 07:19:51.612417 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:51 crc kubenswrapper[4955]: I1128 07:19:51.765219 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krhj2\" (UniqueName: \"kubernetes.io/projected/014f5f24-ac65-41d1-a36a-b9400ef37215-kube-api-access-krhj2\") pod \"014f5f24-ac65-41d1-a36a-b9400ef37215\" (UID: \"014f5f24-ac65-41d1-a36a-b9400ef37215\") " Nov 28 07:19:51 crc kubenswrapper[4955]: I1128 07:19:51.765631 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/014f5f24-ac65-41d1-a36a-b9400ef37215-host\") pod \"014f5f24-ac65-41d1-a36a-b9400ef37215\" (UID: \"014f5f24-ac65-41d1-a36a-b9400ef37215\") " Nov 28 07:19:51 crc kubenswrapper[4955]: I1128 07:19:51.765702 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014f5f24-ac65-41d1-a36a-b9400ef37215-host" (OuterVolumeSpecName: "host") pod "014f5f24-ac65-41d1-a36a-b9400ef37215" (UID: "014f5f24-ac65-41d1-a36a-b9400ef37215"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:19:51 crc kubenswrapper[4955]: I1128 07:19:51.766245 4955 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/014f5f24-ac65-41d1-a36a-b9400ef37215-host\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:51 crc kubenswrapper[4955]: I1128 07:19:51.779728 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014f5f24-ac65-41d1-a36a-b9400ef37215-kube-api-access-krhj2" (OuterVolumeSpecName: "kube-api-access-krhj2") pod "014f5f24-ac65-41d1-a36a-b9400ef37215" (UID: "014f5f24-ac65-41d1-a36a-b9400ef37215"). InnerVolumeSpecName "kube-api-access-krhj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:19:51 crc kubenswrapper[4955]: I1128 07:19:51.869175 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krhj2\" (UniqueName: \"kubernetes.io/projected/014f5f24-ac65-41d1-a36a-b9400ef37215-kube-api-access-krhj2\") on node \"crc\" DevicePath \"\"" Nov 28 07:19:52 crc kubenswrapper[4955]: I1128 07:19:52.509499 4955 scope.go:117] "RemoveContainer" containerID="6fb83488f25cd388fed43bd3d5963adf6d208b1d466f32fdb6a8790674d281d9" Nov 28 07:19:52 crc kubenswrapper[4955]: I1128 07:19:52.509546 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/crc-debug-hkw84" Nov 28 07:19:53 crc kubenswrapper[4955]: I1128 07:19:53.393532 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:19:53 crc kubenswrapper[4955]: I1128 07:19:53.393962 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:19:53 crc kubenswrapper[4955]: I1128 07:19:53.717873 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="014f5f24-ac65-41d1-a36a-b9400ef37215" path="/var/lib/kubelet/pods/014f5f24-ac65-41d1-a36a-b9400ef37215/volumes" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.040830 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b4956ccd4-8qnhx_eb37217e-f20a-4e50-b616-b0b1231fbd89/barbican-api/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.198393 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b4956ccd4-8qnhx_eb37217e-f20a-4e50-b616-b0b1231fbd89/barbican-api-log/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.223328 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6985c74bc8-qgvjf_f30cb01b-f625-4031-98a0-272f85d43a81/barbican-keystone-listener/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.277346 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-6985c74bc8-qgvjf_f30cb01b-f625-4031-98a0-272f85d43a81/barbican-keystone-listener-log/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.411378 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5dfcd47cfc-s75nx_24535783-21c6-4550-965e-7fd84038058b/barbican-worker/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.453603 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5dfcd47cfc-s75nx_24535783-21c6-4550-965e-7fd84038058b/barbican-worker-log/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.650523 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs_be0906bb-475c-4229-9a9f-9a5361e6172e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.667901 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cee6f72e-fd30-4482-881a-4afb4c003099/ceilometer-central-agent/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.775620 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cee6f72e-fd30-4482-881a-4afb4c003099/ceilometer-notification-agent/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.865021 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cee6f72e-fd30-4482-881a-4afb4c003099/proxy-httpd/0.log" Nov 28 07:20:06 crc kubenswrapper[4955]: I1128 07:20:06.914454 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cee6f72e-fd30-4482-881a-4afb4c003099/sg-core/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 07:20:07.025673 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fdcb880-5f80-4347-81ef-f9f5ff9a097b/cinder-api/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 
07:20:07.077538 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fdcb880-5f80-4347-81ef-f9f5ff9a097b/cinder-api-log/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 07:20:07.184914 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7f74ee90-8d6d-42f1-8aa7-61d06d62f07c/cinder-scheduler/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 07:20:07.246231 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7f74ee90-8d6d-42f1-8aa7-61d06d62f07c/probe/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 07:20:07.427360 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9_40082d1e-0844-4d3d-9c68-25fb8eb44351/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 07:20:07.463214 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-b567k_3efdabfd-7ad3-4586-8398-97512113e085/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 07:20:07.599091 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-jb4z6_5eb6e022-3f20-498e-ac8d-8fed796ff122/init/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 07:20:07.859660 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-jb4z6_5eb6e022-3f20-498e-ac8d-8fed796ff122/init/0.log" Nov 28 07:20:07 crc kubenswrapper[4955]: I1128 07:20:07.957936 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-jb4z6_5eb6e022-3f20-498e-ac8d-8fed796ff122/dnsmasq-dns/0.log" Nov 28 07:20:08 crc kubenswrapper[4955]: I1128 07:20:08.062392 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq_40e141ea-e10b-4e62-a075-da26dee75286/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:08 crc kubenswrapper[4955]: I1128 07:20:08.240793 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_02ab7a37-574b-4e32-bc8a-c5dd638a6a45/glance-log/0.log" Nov 28 07:20:08 crc kubenswrapper[4955]: I1128 07:20:08.246970 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_02ab7a37-574b-4e32-bc8a-c5dd638a6a45/glance-httpd/0.log" Nov 28 07:20:08 crc kubenswrapper[4955]: I1128 07:20:08.433708 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_104ece36-bc05-45c5-984c-55d61b6ebe8b/glance-httpd/0.log" Nov 28 07:20:08 crc kubenswrapper[4955]: I1128 07:20:08.457198 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_104ece36-bc05-45c5-984c-55d61b6ebe8b/glance-log/0.log" Nov 28 07:20:08 crc kubenswrapper[4955]: I1128 07:20:08.727875 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5q92s_93204339-2c92-4d5d-a519-402ee3a45e79/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:08 crc kubenswrapper[4955]: I1128 07:20:08.746722 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56f45c5b6-nqg9b_0540bb1f-c904-4b07-acda-ce47d0bdfa7c/horizon/0.log" Nov 28 07:20:08 crc kubenswrapper[4955]: I1128 07:20:08.966965 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2jx76_86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:09 crc kubenswrapper[4955]: I1128 07:20:09.009116 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-56f45c5b6-nqg9b_0540bb1f-c904-4b07-acda-ce47d0bdfa7c/horizon-log/0.log" Nov 28 07:20:09 crc kubenswrapper[4955]: I1128 07:20:09.214544 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29405221-4mmm4_0b8ffa87-03b1-4df9-a491-15db50f8a75e/keystone-cron/0.log" Nov 28 07:20:09 crc kubenswrapper[4955]: I1128 07:20:09.309439 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-766d6648f9-vfvxt_374d1f5d-9bd1-4362-a245-97f658097965/keystone-api/0.log" Nov 28 07:20:09 crc kubenswrapper[4955]: I1128 07:20:09.423281 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ab74c890-3754-4fdb-84ab-0884ae7ca237/kube-state-metrics/0.log" Nov 28 07:20:09 crc kubenswrapper[4955]: I1128 07:20:09.501091 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc_845c1878-1788-4409-bbd8-a76a2f3eed71/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:09 crc kubenswrapper[4955]: I1128 07:20:09.923183 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6478fb8469-kzjkp_8716e967-61aa-43b9-9d68-cb6699c5c673/neutron-httpd/0.log" Nov 28 07:20:09 crc kubenswrapper[4955]: I1128 07:20:09.980731 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6478fb8469-kzjkp_8716e967-61aa-43b9-9d68-cb6699c5c673/neutron-api/0.log" Nov 28 07:20:10 crc kubenswrapper[4955]: I1128 07:20:10.006248 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc_17b265c1-83dd-4a5c-9e5b-92923c919d1d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:10 crc kubenswrapper[4955]: I1128 07:20:10.513216 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_1effeb5c-c81a-43ff-8624-9c077f2484a3/nova-cell0-conductor-conductor/0.log" Nov 28 07:20:10 crc kubenswrapper[4955]: I1128 07:20:10.527412 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_03bbb794-571b-4980-8445-7766a14bb5c9/nova-api-log/0.log" Nov 28 07:20:10 crc kubenswrapper[4955]: I1128 07:20:10.704447 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_03bbb794-571b-4980-8445-7766a14bb5c9/nova-api-api/0.log" Nov 28 07:20:10 crc kubenswrapper[4955]: I1128 07:20:10.829881 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3d076178-18c0-42af-b40b-3cc8f1cb77cb/nova-cell1-conductor-conductor/0.log" Nov 28 07:20:10 crc kubenswrapper[4955]: I1128 07:20:10.893922 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fe4d7165-1010-42a5-a707-257169437be1/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 07:20:11 crc kubenswrapper[4955]: I1128 07:20:11.106149 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-db78m_0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:11 crc kubenswrapper[4955]: I1128 07:20:11.189764 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_932b3fc6-dd61-4bcd-9836-f04de5a42ee7/nova-metadata-log/0.log" Nov 28 07:20:11 crc kubenswrapper[4955]: I1128 07:20:11.576218 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_61ac9549-3394-4586-ae7d-afede69f862c/nova-scheduler-scheduler/0.log" Nov 28 07:20:11 crc kubenswrapper[4955]: I1128 07:20:11.714865 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5ce3cc8f-9d19-49fa-83a9-d71cf669d26c/mysql-bootstrap/0.log" Nov 28 07:20:11 crc kubenswrapper[4955]: I1128 
07:20:11.866989 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5ce3cc8f-9d19-49fa-83a9-d71cf669d26c/mysql-bootstrap/0.log" Nov 28 07:20:11 crc kubenswrapper[4955]: I1128 07:20:11.936481 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5ce3cc8f-9d19-49fa-83a9-d71cf669d26c/galera/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.076643 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_da36284b-b2a1-4008-a19c-3916e99c0bec/mysql-bootstrap/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.273362 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_da36284b-b2a1-4008-a19c-3916e99c0bec/mysql-bootstrap/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.318230 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_da36284b-b2a1-4008-a19c-3916e99c0bec/galera/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.402582 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_932b3fc6-dd61-4bcd-9836-f04de5a42ee7/nova-metadata-metadata/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.464548 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_98ccf66c-347b-4fbe-9b2e-974e15e3eea7/openstackclient/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.528270 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vqt82_3543aa49-473d-4e57-a9eb-edbca5c7f58d/openstack-network-exporter/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.674829 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nxwbb_7c1a8276-d93e-498f-94a2-e698b071f1ee/ovsdb-server-init/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.884156 4955 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nxwbb_7c1a8276-d93e-498f-94a2-e698b071f1ee/ovsdb-server-init/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.935604 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nxwbb_7c1a8276-d93e-498f-94a2-e698b071f1ee/ovsdb-server/0.log" Nov 28 07:20:12 crc kubenswrapper[4955]: I1128 07:20:12.941843 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nxwbb_7c1a8276-d93e-498f-94a2-e698b071f1ee/ovs-vswitchd/0.log" Nov 28 07:20:13 crc kubenswrapper[4955]: I1128 07:20:13.113640 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-p2bvh_3963971f-dccf-42a8-9889-b5e122ee6809/ovn-controller/0.log" Nov 28 07:20:13 crc kubenswrapper[4955]: I1128 07:20:13.463380 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_88f30640-ea6e-4479-b4ab-4e21f96f7ddb/openstack-network-exporter/0.log" Nov 28 07:20:13 crc kubenswrapper[4955]: I1128 07:20:13.614040 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_88f30640-ea6e-4479-b4ab-4e21f96f7ddb/ovn-northd/0.log" Nov 28 07:20:13 crc kubenswrapper[4955]: I1128 07:20:13.745426 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dmn6w_9e473921-1378-4318-89ef-7f2f39c41aed/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:13 crc kubenswrapper[4955]: I1128 07:20:13.934077 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_184e43b4-9c7c-4df1-b1a7-503ef8139459/openstack-network-exporter/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.017161 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_184e43b4-9c7c-4df1-b1a7-503ef8139459/ovsdbserver-nb/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.077125 4955 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1ba1430-28cb-4bba-936d-00e8988eab09/openstack-network-exporter/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.228158 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1ba1430-28cb-4bba-936d-00e8988eab09/ovsdbserver-sb/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.347139 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d446689d4-fvjm6_0a8c9e11-5611-4739-9a2c-24ad016682c0/placement-api/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.435710 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c326a903-f8eb-4e06-a44b-ae3bca93e0b6/setup-container/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.491984 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d446689d4-fvjm6_0a8c9e11-5611-4739-9a2c-24ad016682c0/placement-log/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.720254 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c326a903-f8eb-4e06-a44b-ae3bca93e0b6/rabbitmq/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.774786 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8677f8b0-5621-470c-826f-1c2f9725c6d7/setup-container/0.log" Nov 28 07:20:14 crc kubenswrapper[4955]: I1128 07:20:14.775381 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c326a903-f8eb-4e06-a44b-ae3bca93e0b6/setup-container/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.035715 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8677f8b0-5621-470c-826f-1c2f9725c6d7/setup-container/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.046611 4955 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-server-0_8677f8b0-5621-470c-826f-1c2f9725c6d7/rabbitmq/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.099413 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc_6477c9e8-dda5-46fe-8b80-3ccc99f2b00d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.299188 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9_54f3d846-b19a-415e-93bb-9f4c1a3e02dc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.367890 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-kk4kh_916114e1-c9f4-45af-acbd-14fa82b380ed/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.497143 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-pzmhq_41f03c76-5015-4f05-bf3d-0c21610c1a50/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.621081 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hqrs6_6788af76-b07b-492d-b4bb-dceb2d35b853/ssh-known-hosts-edpm-deployment/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.815415 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d59886dc-t4pgs_91783657-7b6c-4053-9c14-aed825d54a73/proxy-server/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.895802 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-hf68n_f8cdb34d-d310-43c4-bdcd-83e12752f6ea/swift-ring-rebalance/0.log" Nov 28 07:20:15 crc kubenswrapper[4955]: I1128 07:20:15.903065 4955 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d59886dc-t4pgs_91783657-7b6c-4053-9c14-aed825d54a73/proxy-httpd/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.055129 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/account-auditor/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.085153 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/account-reaper/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.164723 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/account-replicator/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.243653 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/account-server/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.252875 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/container-auditor/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.335908 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/container-replicator/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.344481 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/container-server/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.424612 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/container-updater/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.455669 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-auditor/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.539365 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-expirer/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.568946 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-replicator/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.653797 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-server/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.670613 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-updater/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.747184 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/rsync/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.828661 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/swift-recon-cron/0.log" Nov 28 07:20:16 crc kubenswrapper[4955]: I1128 07:20:16.945436 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb_d7286bef-2382-464e-95fa-61654cead41d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:17 crc kubenswrapper[4955]: I1128 07:20:17.071027 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_81ccd45f-3f32-4e86-8874-0468a6fc2471/tempest-tests-tempest-tests-runner/0.log" Nov 28 07:20:17 crc kubenswrapper[4955]: I1128 07:20:17.189353 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_11625f83-961b-4c79-aa1a-d8d9fe1c6bf1/test-operator-logs-container/0.log" Nov 28 07:20:17 crc kubenswrapper[4955]: I1128 07:20:17.320859 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jck8m_a95ee68c-d5b2-490f-a4e4-33bb8bb56536/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:20:23 crc kubenswrapper[4955]: I1128 07:20:23.392456 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:20:23 crc kubenswrapper[4955]: I1128 07:20:23.393166 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:20:25 crc kubenswrapper[4955]: I1128 07:20:25.590181 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b8f1a214-823b-4a75-ada2-b5973ad7abd6/memcached/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.136215 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/util/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.295027 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/util/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.331756 4955 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/pull/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.359394 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/pull/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.553063 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/util/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.571876 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/extract/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.584521 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/pull/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.750789 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-5kt75_3e51ea77-cbc1-4ebd-9247-335d93211353/kube-rbac-proxy/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.813176 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-5kt75_3e51ea77-cbc1-4ebd-9247-335d93211353/manager/0.log" Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.819147 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-p5n27_813a8c4e-06bd-467e-9b80-0e3e88fb361a/kube-rbac-proxy/0.log" 
Nov 28 07:20:41 crc kubenswrapper[4955]: I1128 07:20:41.984119 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-p5n27_813a8c4e-06bd-467e-9b80-0e3e88fb361a/manager/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.056411 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-8cmf7_ef549437-6bef-428a-991f-b38cc613ec1e/kube-rbac-proxy/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.062880 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-8cmf7_ef549437-6bef-428a-991f-b38cc613ec1e/manager/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.206812 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-jtxkh_8bcb6097-d2d8-4190-afbd-644daa5ce7b6/kube-rbac-proxy/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.330129 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-jtxkh_8bcb6097-d2d8-4190-afbd-644daa5ce7b6/manager/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.390073 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-lgnvq_d2c0d9ce-4c16-451d-948b-75ae7bbca487/kube-rbac-proxy/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.455499 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-lgnvq_d2c0d9ce-4c16-451d-948b-75ae7bbca487/manager/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.552049 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-c7pkv_84c6c0d5-d427-471a-8a54-9d3fc28264bc/kube-rbac-proxy/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.611059 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-c7pkv_84c6c0d5-d427-471a-8a54-9d3fc28264bc/manager/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.699295 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-rtv6s_3f5477af-57e8-4a83-95ce-9fea4d62e797/kube-rbac-proxy/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.825777 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-mdrg9_d8ca8a28-b011-4a61-b37d-5f84543d63bb/kube-rbac-proxy/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.852708 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-rtv6s_3f5477af-57e8-4a83-95ce-9fea4d62e797/manager/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.950342 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-mdrg9_d8ca8a28-b011-4a61-b37d-5f84543d63bb/manager/0.log" Nov 28 07:20:42 crc kubenswrapper[4955]: I1128 07:20:42.984366 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-4pmmp_245721bd-2bc5-4f42-ac45-5ae0b07cd77e/kube-rbac-proxy/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.128222 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-4pmmp_245721bd-2bc5-4f42-ac45-5ae0b07cd77e/manager/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.250115 
4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-mwk52_042d3c47-fa72-4e2f-a127-2885c81ec7e4/kube-rbac-proxy/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.259323 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-mwk52_042d3c47-fa72-4e2f-a127-2885c81ec7e4/manager/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.406640 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-kdz89_d2d018b1-e591-4109-9b83-82bc60b2cb59/kube-rbac-proxy/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.447985 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-kdz89_d2d018b1-e591-4109-9b83-82bc60b2cb59/manager/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.503732 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-fz6rt_4871d492-a015-4a2b-9f6a-62e15bfdb825/kube-rbac-proxy/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.615869 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-fz6rt_4871d492-a015-4a2b-9f6a-62e15bfdb825/manager/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.637237 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-hwmcq_801bb8d6-c107-48ad-b985-62e932b38992/kube-rbac-proxy/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.803283 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-hwmcq_801bb8d6-c107-48ad-b985-62e932b38992/manager/0.log" Nov 28 07:20:43 crc 
kubenswrapper[4955]: I1128 07:20:43.855962 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-f77r5_a9444b3d-85c5-4f44-953d-65a4dd2f30f2/kube-rbac-proxy/0.log" Nov 28 07:20:43 crc kubenswrapper[4955]: I1128 07:20:43.893755 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-f77r5_a9444b3d-85c5-4f44-953d-65a4dd2f30f2/manager/0.log" Nov 28 07:20:44 crc kubenswrapper[4955]: I1128 07:20:44.035293 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6_a1c5873f-0d08-4f51-aa91-822fc86a33e3/manager/0.log" Nov 28 07:20:44 crc kubenswrapper[4955]: I1128 07:20:44.048296 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6_a1c5873f-0d08-4f51-aa91-822fc86a33e3/kube-rbac-proxy/0.log" Nov 28 07:20:44 crc kubenswrapper[4955]: I1128 07:20:44.385950 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d8f67c45-t6djq_a76c5381-15dd-479f-af8a-78a8c2ec2bad/operator/0.log" Nov 28 07:20:44 crc kubenswrapper[4955]: I1128 07:20:44.461932 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6zdst_739948d5-645f-4c91-9372-588a7128b7b2/registry-server/0.log" Nov 28 07:20:44 crc kubenswrapper[4955]: I1128 07:20:44.559671 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-qtcjk_f0d92863-0f89-415d-b4a3-24e09fb4ec02/kube-rbac-proxy/0.log" Nov 28 07:20:44 crc kubenswrapper[4955]: I1128 07:20:44.781732 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-4csc4_5d9f654a-a223-4b91-93fd-301807c6f29a/kube-rbac-proxy/0.log" Nov 28 07:20:44 crc kubenswrapper[4955]: I1128 07:20:44.799104 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-qtcjk_f0d92863-0f89-415d-b4a3-24e09fb4ec02/manager/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.096092 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-4csc4_5d9f654a-a223-4b91-93fd-301807c6f29a/manager/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.146012 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-n7jmb_5daef806-96c3-439c-85f9-f1ef27a8be0d/operator/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.156585 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bd7f7485b-zbpwx_84a07034-e21d-4e5b-a6ef-ba76d30b662a/manager/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.272722 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-q4wgd_cfa54a97-6210-4566-bf61-c0c7720ec0ec/kube-rbac-proxy/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.336221 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-rvkg2_d11a80a8-9bba-491e-aa38-e93e59c3343e/kube-rbac-proxy/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.363123 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-q4wgd_cfa54a97-6210-4566-bf61-c0c7720ec0ec/manager/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.437290 4955 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-rvkg2_d11a80a8-9bba-491e-aa38-e93e59c3343e/manager/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.531279 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-4vhhh_e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac/kube-rbac-proxy/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.537345 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-4vhhh_e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac/manager/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.611973 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-mmqbs_0811185a-c49e-4a81-b6d7-c786f590177b/kube-rbac-proxy/0.log" Nov 28 07:20:45 crc kubenswrapper[4955]: I1128 07:20:45.652868 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-mmqbs_0811185a-c49e-4a81-b6d7-c786f590177b/manager/0.log" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.393338 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.393774 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 
07:20:53.393873 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.394775 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.394834 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" gracePeriod=600 Nov 28 07:20:53 crc kubenswrapper[4955]: E1128 07:20:53.520828 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.640382 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kb2r2"] Nov 28 07:20:53 crc kubenswrapper[4955]: E1128 07:20:53.641399 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014f5f24-ac65-41d1-a36a-b9400ef37215" containerName="container-00" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.641598 4955 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="014f5f24-ac65-41d1-a36a-b9400ef37215" containerName="container-00" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.642204 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="014f5f24-ac65-41d1-a36a-b9400ef37215" containerName="container-00" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.645028 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.675614 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kb2r2"] Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.714044 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-catalog-content\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.714492 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdfrp\" (UniqueName: \"kubernetes.io/projected/f147322e-8414-4565-8d52-5566735ac784-kube-api-access-pdfrp\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.714709 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-utilities\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.816551 4955 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-catalog-content\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.816619 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdfrp\" (UniqueName: \"kubernetes.io/projected/f147322e-8414-4565-8d52-5566735ac784-kube-api-access-pdfrp\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.816669 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-utilities\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.817000 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-catalog-content\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.817059 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-utilities\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.836433 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdfrp\" (UniqueName: 
\"kubernetes.io/projected/f147322e-8414-4565-8d52-5566735ac784-kube-api-access-pdfrp\") pod \"redhat-operators-kb2r2\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:53 crc kubenswrapper[4955]: I1128 07:20:53.970046 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:20:54 crc kubenswrapper[4955]: I1128 07:20:54.090870 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" exitCode=0 Nov 28 07:20:54 crc kubenswrapper[4955]: I1128 07:20:54.091138 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42"} Nov 28 07:20:54 crc kubenswrapper[4955]: I1128 07:20:54.091177 4955 scope.go:117] "RemoveContainer" containerID="13abcec15b6c79655d494ebee90b7d6c3e3f67982eede961271cbde86fce42b5" Nov 28 07:20:54 crc kubenswrapper[4955]: I1128 07:20:54.091922 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:20:54 crc kubenswrapper[4955]: E1128 07:20:54.092213 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:20:54 crc kubenswrapper[4955]: I1128 07:20:54.485218 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-kb2r2"] Nov 28 07:20:55 crc kubenswrapper[4955]: I1128 07:20:55.104999 4955 generic.go:334] "Generic (PLEG): container finished" podID="f147322e-8414-4565-8d52-5566735ac784" containerID="a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f" exitCode=0 Nov 28 07:20:55 crc kubenswrapper[4955]: I1128 07:20:55.105046 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb2r2" event={"ID":"f147322e-8414-4565-8d52-5566735ac784","Type":"ContainerDied","Data":"a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f"} Nov 28 07:20:55 crc kubenswrapper[4955]: I1128 07:20:55.105329 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb2r2" event={"ID":"f147322e-8414-4565-8d52-5566735ac784","Type":"ContainerStarted","Data":"f737c6a032fe97a5864f9bca273f54773015c8023c0659ece36cec86bd534fde"} Nov 28 07:20:55 crc kubenswrapper[4955]: I1128 07:20:55.107247 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:20:57 crc kubenswrapper[4955]: I1128 07:20:57.131356 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb2r2" event={"ID":"f147322e-8414-4565-8d52-5566735ac784","Type":"ContainerStarted","Data":"a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784"} Nov 28 07:20:59 crc kubenswrapper[4955]: I1128 07:20:59.150915 4955 generic.go:334] "Generic (PLEG): container finished" podID="f147322e-8414-4565-8d52-5566735ac784" containerID="a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784" exitCode=0 Nov 28 07:20:59 crc kubenswrapper[4955]: I1128 07:20:59.151000 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb2r2" 
event={"ID":"f147322e-8414-4565-8d52-5566735ac784","Type":"ContainerDied","Data":"a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784"} Nov 28 07:21:00 crc kubenswrapper[4955]: I1128 07:21:00.163731 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb2r2" event={"ID":"f147322e-8414-4565-8d52-5566735ac784","Type":"ContainerStarted","Data":"3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048"} Nov 28 07:21:00 crc kubenswrapper[4955]: I1128 07:21:00.197620 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kb2r2" podStartSLOduration=2.542476663 podStartE2EDuration="7.197594772s" podCreationTimestamp="2025-11-28 07:20:53 +0000 UTC" firstStartedPulling="2025-11-28 07:20:55.106991628 +0000 UTC m=+3577.696247208" lastFinishedPulling="2025-11-28 07:20:59.762109747 +0000 UTC m=+3582.351365317" observedRunningTime="2025-11-28 07:21:00.190877642 +0000 UTC m=+3582.780133222" watchObservedRunningTime="2025-11-28 07:21:00.197594772 +0000 UTC m=+3582.786850352" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.025016 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6rjgj"] Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.038644 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rjgj"] Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.038744 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.164874 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-utilities\") pod \"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.164992 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ghw\" (UniqueName: \"kubernetes.io/projected/f7e45ae4-1bc8-483d-a487-dc2195b7285b-kube-api-access-84ghw\") pod \"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.165017 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-catalog-content\") pod \"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.267032 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84ghw\" (UniqueName: \"kubernetes.io/projected/f7e45ae4-1bc8-483d-a487-dc2195b7285b-kube-api-access-84ghw\") pod \"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.267107 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-catalog-content\") pod 
\"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.267266 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-utilities\") pod \"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.268058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-utilities\") pod \"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.268092 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-catalog-content\") pod \"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.293188 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ghw\" (UniqueName: \"kubernetes.io/projected/f7e45ae4-1bc8-483d-a487-dc2195b7285b-kube-api-access-84ghw\") pod \"certified-operators-6rjgj\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.357872 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:01 crc kubenswrapper[4955]: I1128 07:21:01.833277 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rjgj"] Nov 28 07:21:02 crc kubenswrapper[4955]: I1128 07:21:02.180412 4955 generic.go:334] "Generic (PLEG): container finished" podID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerID="58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692" exitCode=0 Nov 28 07:21:02 crc kubenswrapper[4955]: I1128 07:21:02.180488 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rjgj" event={"ID":"f7e45ae4-1bc8-483d-a487-dc2195b7285b","Type":"ContainerDied","Data":"58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692"} Nov 28 07:21:02 crc kubenswrapper[4955]: I1128 07:21:02.180702 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rjgj" event={"ID":"f7e45ae4-1bc8-483d-a487-dc2195b7285b","Type":"ContainerStarted","Data":"5dcd8ca2f157140d08d1214644b3f4072ecc969453be9a5f66a98b4b40ba212f"} Nov 28 07:21:03 crc kubenswrapper[4955]: I1128 07:21:03.970695 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:21:03 crc kubenswrapper[4955]: I1128 07:21:03.971155 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:21:04 crc kubenswrapper[4955]: I1128 07:21:04.200291 4955 generic.go:334] "Generic (PLEG): container finished" podID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerID="82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6" exitCode=0 Nov 28 07:21:04 crc kubenswrapper[4955]: I1128 07:21:04.200380 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rjgj" 
event={"ID":"f7e45ae4-1bc8-483d-a487-dc2195b7285b","Type":"ContainerDied","Data":"82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6"} Nov 28 07:21:04 crc kubenswrapper[4955]: I1128 07:21:04.340651 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gn2ld_c9a5dd12-fb17-4fab-b1f9-9a005cc2877a/control-plane-machine-set-operator/0.log" Nov 28 07:21:04 crc kubenswrapper[4955]: I1128 07:21:04.556383 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4c6dx_4b20c134-37f2-42c2-be5f-d6f4a86d7b10/machine-api-operator/0.log" Nov 28 07:21:04 crc kubenswrapper[4955]: I1128 07:21:04.560697 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4c6dx_4b20c134-37f2-42c2-be5f-d6f4a86d7b10/kube-rbac-proxy/0.log" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.022549 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kb2r2" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="registry-server" probeResult="failure" output=< Nov 28 07:21:05 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Nov 28 07:21:05 crc kubenswrapper[4955]: > Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.627046 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rp6ld"] Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.629574 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.642406 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp6ld"] Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.796757 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-catalog-content\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.796890 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrqkr\" (UniqueName: \"kubernetes.io/projected/26a5b36b-fe01-4916-a830-0f1908c21875-kube-api-access-zrqkr\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.796940 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-utilities\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.898282 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-utilities\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.898675 4955 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-catalog-content\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.898854 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrqkr\" (UniqueName: \"kubernetes.io/projected/26a5b36b-fe01-4916-a830-0f1908c21875-kube-api-access-zrqkr\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.899998 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-utilities\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.900108 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-catalog-content\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:05 crc kubenswrapper[4955]: I1128 07:21:05.923736 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrqkr\" (UniqueName: \"kubernetes.io/projected/26a5b36b-fe01-4916-a830-0f1908c21875-kube-api-access-zrqkr\") pod \"redhat-marketplace-rp6ld\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:06 crc kubenswrapper[4955]: I1128 07:21:06.013551 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:06 crc kubenswrapper[4955]: I1128 07:21:06.241843 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rjgj" event={"ID":"f7e45ae4-1bc8-483d-a487-dc2195b7285b","Type":"ContainerStarted","Data":"99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b"} Nov 28 07:21:06 crc kubenswrapper[4955]: I1128 07:21:06.267058 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6rjgj" podStartSLOduration=2.971699423 podStartE2EDuration="6.267039071s" podCreationTimestamp="2025-11-28 07:21:00 +0000 UTC" firstStartedPulling="2025-11-28 07:21:02.181864283 +0000 UTC m=+3584.771119853" lastFinishedPulling="2025-11-28 07:21:05.477203931 +0000 UTC m=+3588.066459501" observedRunningTime="2025-11-28 07:21:06.266750243 +0000 UTC m=+3588.856005813" watchObservedRunningTime="2025-11-28 07:21:06.267039071 +0000 UTC m=+3588.856294641" Nov 28 07:21:06 crc kubenswrapper[4955]: W1128 07:21:06.497981 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a5b36b_fe01_4916_a830_0f1908c21875.slice/crio-0cfce8c80f60c6303063e3fdd4c3ba739c7f85b13a9d30a26c49e686450db52f WatchSource:0}: Error finding container 0cfce8c80f60c6303063e3fdd4c3ba739c7f85b13a9d30a26c49e686450db52f: Status 404 returned error can't find the container with id 0cfce8c80f60c6303063e3fdd4c3ba739c7f85b13a9d30a26c49e686450db52f Nov 28 07:21:06 crc kubenswrapper[4955]: I1128 07:21:06.499196 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp6ld"] Nov 28 07:21:07 crc kubenswrapper[4955]: I1128 07:21:07.252191 4955 generic.go:334] "Generic (PLEG): container finished" podID="26a5b36b-fe01-4916-a830-0f1908c21875" containerID="d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1" 
exitCode=0 Nov 28 07:21:07 crc kubenswrapper[4955]: I1128 07:21:07.252233 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp6ld" event={"ID":"26a5b36b-fe01-4916-a830-0f1908c21875","Type":"ContainerDied","Data":"d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1"} Nov 28 07:21:07 crc kubenswrapper[4955]: I1128 07:21:07.252279 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp6ld" event={"ID":"26a5b36b-fe01-4916-a830-0f1908c21875","Type":"ContainerStarted","Data":"0cfce8c80f60c6303063e3fdd4c3ba739c7f85b13a9d30a26c49e686450db52f"} Nov 28 07:21:08 crc kubenswrapper[4955]: I1128 07:21:08.262206 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp6ld" event={"ID":"26a5b36b-fe01-4916-a830-0f1908c21875","Type":"ContainerStarted","Data":"4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810"} Nov 28 07:21:08 crc kubenswrapper[4955]: I1128 07:21:08.704748 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:21:08 crc kubenswrapper[4955]: E1128 07:21:08.705037 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:21:09 crc kubenswrapper[4955]: I1128 07:21:09.282487 4955 generic.go:334] "Generic (PLEG): container finished" podID="26a5b36b-fe01-4916-a830-0f1908c21875" containerID="4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810" exitCode=0 Nov 28 07:21:09 crc kubenswrapper[4955]: I1128 07:21:09.283265 4955 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp6ld" event={"ID":"26a5b36b-fe01-4916-a830-0f1908c21875","Type":"ContainerDied","Data":"4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810"} Nov 28 07:21:10 crc kubenswrapper[4955]: I1128 07:21:10.293717 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp6ld" event={"ID":"26a5b36b-fe01-4916-a830-0f1908c21875","Type":"ContainerStarted","Data":"5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561"} Nov 28 07:21:10 crc kubenswrapper[4955]: I1128 07:21:10.320812 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rp6ld" podStartSLOduration=2.90550572 podStartE2EDuration="5.32078936s" podCreationTimestamp="2025-11-28 07:21:05 +0000 UTC" firstStartedPulling="2025-11-28 07:21:07.255863299 +0000 UTC m=+3589.845118869" lastFinishedPulling="2025-11-28 07:21:09.671146939 +0000 UTC m=+3592.260402509" observedRunningTime="2025-11-28 07:21:10.316093537 +0000 UTC m=+3592.905349117" watchObservedRunningTime="2025-11-28 07:21:10.32078936 +0000 UTC m=+3592.910044940" Nov 28 07:21:11 crc kubenswrapper[4955]: I1128 07:21:11.358652 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:11 crc kubenswrapper[4955]: I1128 07:21:11.358998 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:11 crc kubenswrapper[4955]: I1128 07:21:11.422754 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:12 crc kubenswrapper[4955]: I1128 07:21:12.382629 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:13 crc kubenswrapper[4955]: I1128 
07:21:13.414957 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rjgj"] Nov 28 07:21:14 crc kubenswrapper[4955]: I1128 07:21:14.044274 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:21:14 crc kubenswrapper[4955]: I1128 07:21:14.102296 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:21:14 crc kubenswrapper[4955]: I1128 07:21:14.330912 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6rjgj" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerName="registry-server" containerID="cri-o://99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b" gracePeriod=2 Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.301718 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.308950 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-catalog-content\") pod \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.309029 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84ghw\" (UniqueName: \"kubernetes.io/projected/f7e45ae4-1bc8-483d-a487-dc2195b7285b-kube-api-access-84ghw\") pod \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.309062 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-utilities\") pod \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\" (UID: \"f7e45ae4-1bc8-483d-a487-dc2195b7285b\") " Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.310113 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-utilities" (OuterVolumeSpecName: "utilities") pod "f7e45ae4-1bc8-483d-a487-dc2195b7285b" (UID: "f7e45ae4-1bc8-483d-a487-dc2195b7285b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.321606 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e45ae4-1bc8-483d-a487-dc2195b7285b-kube-api-access-84ghw" (OuterVolumeSpecName: "kube-api-access-84ghw") pod "f7e45ae4-1bc8-483d-a487-dc2195b7285b" (UID: "f7e45ae4-1bc8-483d-a487-dc2195b7285b"). InnerVolumeSpecName "kube-api-access-84ghw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.347846 4955 generic.go:334] "Generic (PLEG): container finished" podID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerID="99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b" exitCode=0 Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.348147 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rjgj" event={"ID":"f7e45ae4-1bc8-483d-a487-dc2195b7285b","Type":"ContainerDied","Data":"99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b"} Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.348178 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rjgj" event={"ID":"f7e45ae4-1bc8-483d-a487-dc2195b7285b","Type":"ContainerDied","Data":"5dcd8ca2f157140d08d1214644b3f4072ecc969453be9a5f66a98b4b40ba212f"} Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.348198 4955 scope.go:117] "RemoveContainer" containerID="99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.348342 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rjgj" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.365531 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7e45ae4-1bc8-483d-a487-dc2195b7285b" (UID: "f7e45ae4-1bc8-483d-a487-dc2195b7285b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.394415 4955 scope.go:117] "RemoveContainer" containerID="82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.410383 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.410469 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84ghw\" (UniqueName: \"kubernetes.io/projected/f7e45ae4-1bc8-483d-a487-dc2195b7285b-kube-api-access-84ghw\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.410484 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7e45ae4-1bc8-483d-a487-dc2195b7285b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.424429 4955 scope.go:117] "RemoveContainer" containerID="58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.460261 4955 scope.go:117] "RemoveContainer" containerID="99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b" Nov 28 07:21:15 crc kubenswrapper[4955]: E1128 07:21:15.460852 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b\": container with ID starting with 99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b not found: ID does not exist" containerID="99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.460882 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b"} err="failed to get container status \"99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b\": rpc error: code = NotFound desc = could not find container \"99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b\": container with ID starting with 99d67b1a364b2c0b3c68373d6bcf53e094442c14cd0246a5c6c7dd2bc25d3f2b not found: ID does not exist" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.460926 4955 scope.go:117] "RemoveContainer" containerID="82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6" Nov 28 07:21:15 crc kubenswrapper[4955]: E1128 07:21:15.461458 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6\": container with ID starting with 82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6 not found: ID does not exist" containerID="82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.461482 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6"} err="failed to get container status \"82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6\": rpc error: code = NotFound desc = could not find container \"82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6\": container with ID starting with 82b70c53685d65f360513b14f5074f9a466a6c5ed4d933385d96cbd724c947e6 not found: ID does not exist" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.461515 4955 scope.go:117] "RemoveContainer" containerID="58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692" Nov 28 07:21:15 crc kubenswrapper[4955]: E1128 07:21:15.461991 4955 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692\": container with ID starting with 58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692 not found: ID does not exist" containerID="58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.462067 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692"} err="failed to get container status \"58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692\": rpc error: code = NotFound desc = could not find container \"58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692\": container with ID starting with 58031ffa5f1e7af31b9b61b673b31df3dcb9eab09a48c4ef8bc8752d9afe1692 not found: ID does not exist" Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.689992 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rjgj"] Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.702328 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6rjgj"] Nov 28 07:21:15 crc kubenswrapper[4955]: I1128 07:21:15.715079 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" path="/var/lib/kubelet/pods/f7e45ae4-1bc8-483d-a487-dc2195b7285b/volumes" Nov 28 07:21:16 crc kubenswrapper[4955]: I1128 07:21:16.014292 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:16 crc kubenswrapper[4955]: I1128 07:21:16.014362 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:16 crc kubenswrapper[4955]: I1128 07:21:16.078767 4955 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:16 crc kubenswrapper[4955]: I1128 07:21:16.427450 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kb2r2"] Nov 28 07:21:16 crc kubenswrapper[4955]: I1128 07:21:16.427871 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kb2r2" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="registry-server" containerID="cri-o://3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048" gracePeriod=2 Nov 28 07:21:16 crc kubenswrapper[4955]: I1128 07:21:16.436410 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:16 crc kubenswrapper[4955]: I1128 07:21:16.927060 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.042145 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-utilities\") pod \"f147322e-8414-4565-8d52-5566735ac784\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.042306 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdfrp\" (UniqueName: \"kubernetes.io/projected/f147322e-8414-4565-8d52-5566735ac784-kube-api-access-pdfrp\") pod \"f147322e-8414-4565-8d52-5566735ac784\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.042356 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-catalog-content\") pod \"f147322e-8414-4565-8d52-5566735ac784\" (UID: \"f147322e-8414-4565-8d52-5566735ac784\") " Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.042860 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-utilities" (OuterVolumeSpecName: "utilities") pod "f147322e-8414-4565-8d52-5566735ac784" (UID: "f147322e-8414-4565-8d52-5566735ac784"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.049048 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f147322e-8414-4565-8d52-5566735ac784-kube-api-access-pdfrp" (OuterVolumeSpecName: "kube-api-access-pdfrp") pod "f147322e-8414-4565-8d52-5566735ac784" (UID: "f147322e-8414-4565-8d52-5566735ac784"). InnerVolumeSpecName "kube-api-access-pdfrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.144260 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.144298 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdfrp\" (UniqueName: \"kubernetes.io/projected/f147322e-8414-4565-8d52-5566735ac784-kube-api-access-pdfrp\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.148602 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f147322e-8414-4565-8d52-5566735ac784" (UID: "f147322e-8414-4565-8d52-5566735ac784"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.246045 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f147322e-8414-4565-8d52-5566735ac784-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.371797 4955 generic.go:334] "Generic (PLEG): container finished" podID="f147322e-8414-4565-8d52-5566735ac784" containerID="3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048" exitCode=0 Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.372596 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb2r2" event={"ID":"f147322e-8414-4565-8d52-5566735ac784","Type":"ContainerDied","Data":"3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048"} Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.372680 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kb2r2" event={"ID":"f147322e-8414-4565-8d52-5566735ac784","Type":"ContainerDied","Data":"f737c6a032fe97a5864f9bca273f54773015c8023c0659ece36cec86bd534fde"} Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.372706 4955 scope.go:117] "RemoveContainer" containerID="3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.372622 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kb2r2" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.410240 4955 scope.go:117] "RemoveContainer" containerID="a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.410459 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kb2r2"] Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.419240 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kb2r2"] Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.434670 4955 scope.go:117] "RemoveContainer" containerID="a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.492644 4955 scope.go:117] "RemoveContainer" containerID="3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048" Nov 28 07:21:17 crc kubenswrapper[4955]: E1128 07:21:17.494718 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048\": container with ID starting with 3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048 not found: ID does not exist" containerID="3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.494757 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048"} err="failed to get container status \"3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048\": rpc error: code = NotFound desc = could not find container \"3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048\": container with ID starting with 3b213754afd8e7c943390f149c442305e61f50dfbf69f315bf9d4d889e766048 not found: ID does 
not exist" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.494780 4955 scope.go:117] "RemoveContainer" containerID="a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784" Nov 28 07:21:17 crc kubenswrapper[4955]: E1128 07:21:17.495098 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784\": container with ID starting with a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784 not found: ID does not exist" containerID="a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.495118 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784"} err="failed to get container status \"a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784\": rpc error: code = NotFound desc = could not find container \"a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784\": container with ID starting with a9134d43143226ebb4819cf0f0454598b7ffc3dc8ea239d80a44f417570e7784 not found: ID does not exist" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.495130 4955 scope.go:117] "RemoveContainer" containerID="a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f" Nov 28 07:21:17 crc kubenswrapper[4955]: E1128 07:21:17.495866 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f\": container with ID starting with a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f not found: ID does not exist" containerID="a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.495919 4955 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f"} err="failed to get container status \"a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f\": rpc error: code = NotFound desc = could not find container \"a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f\": container with ID starting with a3c43e23f2b6714166b7bf6f37bb0d0d6d65cbcafbce365fb2017e3a40f9ea1f not found: ID does not exist" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.715037 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f147322e-8414-4565-8d52-5566735ac784" path="/var/lib/kubelet/pods/f147322e-8414-4565-8d52-5566735ac784/volumes" Nov 28 07:21:17 crc kubenswrapper[4955]: I1128 07:21:17.962304 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-rkw98_e9ab5ef6-2183-4170-87c6-5704f80d6073/cert-manager-controller/0.log" Nov 28 07:21:18 crc kubenswrapper[4955]: I1128 07:21:18.106950 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-4tlhd_e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2/cert-manager-cainjector/0.log" Nov 28 07:21:18 crc kubenswrapper[4955]: I1128 07:21:18.142764 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-p27v9_501578cc-adbd-424b-be8d-6bc4ea59655e/cert-manager-webhook/0.log" Nov 28 07:21:18 crc kubenswrapper[4955]: I1128 07:21:18.820501 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp6ld"] Nov 28 07:21:18 crc kubenswrapper[4955]: I1128 07:21:18.820813 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rp6ld" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" containerName="registry-server" 
containerID="cri-o://5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561" gracePeriod=2 Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.335738 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.432717 4955 generic.go:334] "Generic (PLEG): container finished" podID="26a5b36b-fe01-4916-a830-0f1908c21875" containerID="5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561" exitCode=0 Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.432774 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp6ld" event={"ID":"26a5b36b-fe01-4916-a830-0f1908c21875","Type":"ContainerDied","Data":"5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561"} Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.432808 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rp6ld" event={"ID":"26a5b36b-fe01-4916-a830-0f1908c21875","Type":"ContainerDied","Data":"0cfce8c80f60c6303063e3fdd4c3ba739c7f85b13a9d30a26c49e686450db52f"} Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.432805 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rp6ld" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.432870 4955 scope.go:117] "RemoveContainer" containerID="5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.457395 4955 scope.go:117] "RemoveContainer" containerID="4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.479861 4955 scope.go:117] "RemoveContainer" containerID="d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.510710 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrqkr\" (UniqueName: \"kubernetes.io/projected/26a5b36b-fe01-4916-a830-0f1908c21875-kube-api-access-zrqkr\") pod \"26a5b36b-fe01-4916-a830-0f1908c21875\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.510757 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-utilities\") pod \"26a5b36b-fe01-4916-a830-0f1908c21875\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.511166 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-catalog-content\") pod \"26a5b36b-fe01-4916-a830-0f1908c21875\" (UID: \"26a5b36b-fe01-4916-a830-0f1908c21875\") " Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.512273 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-utilities" (OuterVolumeSpecName: "utilities") pod "26a5b36b-fe01-4916-a830-0f1908c21875" (UID: 
"26a5b36b-fe01-4916-a830-0f1908c21875"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.517840 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a5b36b-fe01-4916-a830-0f1908c21875-kube-api-access-zrqkr" (OuterVolumeSpecName: "kube-api-access-zrqkr") pod "26a5b36b-fe01-4916-a830-0f1908c21875" (UID: "26a5b36b-fe01-4916-a830-0f1908c21875"). InnerVolumeSpecName "kube-api-access-zrqkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.529342 4955 scope.go:117] "RemoveContainer" containerID="5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561" Nov 28 07:21:19 crc kubenswrapper[4955]: E1128 07:21:19.529953 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561\": container with ID starting with 5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561 not found: ID does not exist" containerID="5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.530010 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561"} err="failed to get container status \"5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561\": rpc error: code = NotFound desc = could not find container \"5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561\": container with ID starting with 5951e4e6036eb07263817afd943b8842f7aeadbaae3c3fe12dd38f6bd73b7561 not found: ID does not exist" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.530043 4955 scope.go:117] "RemoveContainer" 
containerID="4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810" Nov 28 07:21:19 crc kubenswrapper[4955]: E1128 07:21:19.530473 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810\": container with ID starting with 4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810 not found: ID does not exist" containerID="4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.530553 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810"} err="failed to get container status \"4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810\": rpc error: code = NotFound desc = could not find container \"4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810\": container with ID starting with 4ff6ab5d2a173a1ac27e101764b278b4de26279027297c74e42529336d022810 not found: ID does not exist" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.530577 4955 scope.go:117] "RemoveContainer" containerID="d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1" Nov 28 07:21:19 crc kubenswrapper[4955]: E1128 07:21:19.530911 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1\": container with ID starting with d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1 not found: ID does not exist" containerID="d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.530956 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1"} err="failed to get container status \"d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1\": rpc error: code = NotFound desc = could not find container \"d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1\": container with ID starting with d08ef0e4ea170aa0b0b707e466ffd244b0250da18a5854599f58331b4dbf7fb1 not found: ID does not exist" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.535232 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26a5b36b-fe01-4916-a830-0f1908c21875" (UID: "26a5b36b-fe01-4916-a830-0f1908c21875"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.612624 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.612656 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26a5b36b-fe01-4916-a830-0f1908c21875-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.612665 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrqkr\" (UniqueName: \"kubernetes.io/projected/26a5b36b-fe01-4916-a830-0f1908c21875-kube-api-access-zrqkr\") on node \"crc\" DevicePath \"\"" Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.767296 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rp6ld"] Nov 28 07:21:19 crc kubenswrapper[4955]: I1128 07:21:19.777149 4955 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-rp6ld"] Nov 28 07:21:19 crc kubenswrapper[4955]: E1128 07:21:19.796392 4955 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a5b36b_fe01_4916_a830_0f1908c21875.slice\": RecentStats: unable to find data in memory cache]" Nov 28 07:21:21 crc kubenswrapper[4955]: I1128 07:21:21.704970 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:21:21 crc kubenswrapper[4955]: E1128 07:21:21.705621 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:21:21 crc kubenswrapper[4955]: I1128 07:21:21.720680 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" path="/var/lib/kubelet/pods/26a5b36b-fe01-4916-a830-0f1908c21875/volumes" Nov 28 07:21:30 crc kubenswrapper[4955]: I1128 07:21:30.197438 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-6ksf2_bf9f276f-14a8-47e1-9eff-7faf202c0ec3/nmstate-console-plugin/0.log" Nov 28 07:21:30 crc kubenswrapper[4955]: I1128 07:21:30.403256 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ztr67_ab1b9bd2-d514-41c2-8315-b035a598caa9/nmstate-handler/0.log" Nov 28 07:21:30 crc kubenswrapper[4955]: I1128 07:21:30.435648 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sh662_68fae9ad-af93-49b7-a741-227d048c4ee4/kube-rbac-proxy/0.log" Nov 28 07:21:30 crc kubenswrapper[4955]: I1128 07:21:30.455041 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sh662_68fae9ad-af93-49b7-a741-227d048c4ee4/nmstate-metrics/0.log" Nov 28 07:21:30 crc kubenswrapper[4955]: I1128 07:21:30.582310 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-8sdmt_c810a1d4-a881-4a83-b9cc-853762f772ee/nmstate-operator/0.log" Nov 28 07:21:30 crc kubenswrapper[4955]: I1128 07:21:30.631247 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-8zt87_8d206e9b-c6d4-4274-acf0-c404fd13eeaf/nmstate-webhook/0.log" Nov 28 07:21:32 crc kubenswrapper[4955]: I1128 07:21:32.705455 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:21:32 crc kubenswrapper[4955]: E1128 07:21:32.706069 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:21:44 crc kubenswrapper[4955]: I1128 07:21:44.704051 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:21:44 crc kubenswrapper[4955]: E1128 07:21:44.706143 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:21:44 crc kubenswrapper[4955]: I1128 07:21:44.823934 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-sjhcg_1f30adb5-d334-4ab0-9acc-8c83ca002efa/kube-rbac-proxy/0.log" Nov 28 07:21:44 crc kubenswrapper[4955]: I1128 07:21:44.911107 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-sjhcg_1f30adb5-d334-4ab0-9acc-8c83ca002efa/controller/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.040764 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-frr-files/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.177988 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-metrics/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.206931 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-frr-files/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.207054 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-reloader/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.248543 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-reloader/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.413823 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-frr-files/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 
07:21:45.414778 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-metrics/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.420111 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-reloader/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.449690 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-metrics/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.623459 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-metrics/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.626461 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/controller/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.627854 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-reloader/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.634073 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-frr-files/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.786172 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/frr-metrics/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.786626 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/kube-rbac-proxy/0.log" Nov 28 07:21:45 crc kubenswrapper[4955]: I1128 07:21:45.822248 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/kube-rbac-proxy-frr/0.log" Nov 28 07:21:46 crc kubenswrapper[4955]: I1128 07:21:46.020866 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/reloader/0.log" Nov 28 07:21:46 crc kubenswrapper[4955]: I1128 07:21:46.058307 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-hgxx8_73d34f84-8626-4d9a-9f32-e5b041f75636/frr-k8s-webhook-server/0.log" Nov 28 07:21:46 crc kubenswrapper[4955]: I1128 07:21:46.252259 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7ffcc65867-np4gd_897e2a63-d58f-4bf7-b954-7614a0b8011b/manager/0.log" Nov 28 07:21:46 crc kubenswrapper[4955]: I1128 07:21:46.464731 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-674c4d7d9d-zw6p8_55e28775-8755-420c-9c5e-99506d84594e/webhook-server/0.log" Nov 28 07:21:46 crc kubenswrapper[4955]: I1128 07:21:46.494656 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cmlb7_ca7c2d77-7f33-4e39-8cc0-4ac415b9d430/kube-rbac-proxy/0.log" Nov 28 07:21:47 crc kubenswrapper[4955]: I1128 07:21:47.090816 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cmlb7_ca7c2d77-7f33-4e39-8cc0-4ac415b9d430/speaker/0.log" Nov 28 07:21:47 crc kubenswrapper[4955]: I1128 07:21:47.236227 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/frr/0.log" Nov 28 07:21:55 crc kubenswrapper[4955]: I1128 07:21:55.706216 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:21:55 crc kubenswrapper[4955]: E1128 07:21:55.707237 4955 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:21:59 crc kubenswrapper[4955]: I1128 07:21:59.910066 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/util/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.071521 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/util/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.084086 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/pull/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.133122 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/pull/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.241207 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/util/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.255868 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/pull/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 
07:22:00.268141 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/extract/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.581778 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/util/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.753570 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/pull/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.786575 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/pull/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.823986 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/util/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.979166 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/util/0.log" Nov 28 07:22:00 crc kubenswrapper[4955]: I1128 07:22:00.988535 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/pull/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.023212 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/extract/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.132357 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-utilities/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.309455 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-utilities/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.381168 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-content/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.387958 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-content/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.483294 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-utilities/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.530181 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-content/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.710592 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-utilities/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.897301 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-utilities/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.902396 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-content/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.928718 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-content/0.log" Nov 28 07:22:01 crc kubenswrapper[4955]: I1128 07:22:01.974653 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/registry-server/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.127625 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-content/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.130282 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-utilities/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.406972 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nsv62_91faab58-aa75-49f0-bf54-3de5fccd9ead/marketplace-operator/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.483854 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-utilities/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.712237 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-content/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.718891 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/registry-server/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.778953 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-content/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.798996 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-utilities/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.982672 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-content/0.log" Nov 28 07:22:02 crc kubenswrapper[4955]: I1128 07:22:02.983680 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-utilities/0.log" Nov 28 07:22:03 crc kubenswrapper[4955]: I1128 07:22:03.051988 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/registry-server/0.log" Nov 28 07:22:03 crc kubenswrapper[4955]: I1128 07:22:03.171856 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-utilities/0.log" Nov 28 07:22:03 crc kubenswrapper[4955]: I1128 07:22:03.432008 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-utilities/0.log" Nov 28 07:22:03 crc kubenswrapper[4955]: I1128 07:22:03.437156 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-content/0.log" Nov 28 07:22:03 crc kubenswrapper[4955]: I1128 07:22:03.441983 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-content/0.log" Nov 28 07:22:03 crc kubenswrapper[4955]: I1128 07:22:03.558595 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-utilities/0.log" Nov 28 07:22:03 crc kubenswrapper[4955]: I1128 07:22:03.608911 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-content/0.log" Nov 28 07:22:04 crc kubenswrapper[4955]: I1128 07:22:04.136368 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/registry-server/0.log" Nov 28 07:22:07 crc kubenswrapper[4955]: I1128 07:22:07.710902 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:22:07 crc kubenswrapper[4955]: E1128 07:22:07.711696 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:22:19 crc 
kubenswrapper[4955]: I1128 07:22:19.704179 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:22:19 crc kubenswrapper[4955]: E1128 07:22:19.704913 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:22:31 crc kubenswrapper[4955]: E1128 07:22:31.807105 4955 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.97:46436->38.102.83.97:33103: write tcp 38.102.83.97:46436->38.102.83.97:33103: write: broken pipe Nov 28 07:22:32 crc kubenswrapper[4955]: I1128 07:22:32.704944 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:22:32 crc kubenswrapper[4955]: E1128 07:22:32.705254 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:22:43 crc kubenswrapper[4955]: I1128 07:22:43.706364 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:22:43 crc kubenswrapper[4955]: E1128 07:22:43.707431 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:22:54 crc kubenswrapper[4955]: I1128 07:22:54.705014 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:22:54 crc kubenswrapper[4955]: E1128 07:22:54.706142 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:23:07 crc kubenswrapper[4955]: I1128 07:23:07.712555 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:23:07 crc kubenswrapper[4955]: E1128 07:23:07.713426 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:23:21 crc kubenswrapper[4955]: I1128 07:23:21.704541 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:23:21 crc kubenswrapper[4955]: E1128 07:23:21.706778 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:23:33 crc kubenswrapper[4955]: I1128 07:23:33.708020 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:23:33 crc kubenswrapper[4955]: E1128 07:23:33.711452 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:23:40 crc kubenswrapper[4955]: I1128 07:23:40.931494 4955 generic.go:334] "Generic (PLEG): container finished" podID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" containerID="699973d3ccbb139edb26e2d0a842e77f68c1ad289e50784138c9165c1af88eed" exitCode=0 Nov 28 07:23:40 crc kubenswrapper[4955]: I1128 07:23:40.931641 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bd467/must-gather-94qtx" event={"ID":"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9","Type":"ContainerDied","Data":"699973d3ccbb139edb26e2d0a842e77f68c1ad289e50784138c9165c1af88eed"} Nov 28 07:23:40 crc kubenswrapper[4955]: I1128 07:23:40.933786 4955 scope.go:117] "RemoveContainer" containerID="699973d3ccbb139edb26e2d0a842e77f68c1ad289e50784138c9165c1af88eed" Nov 28 07:23:41 crc kubenswrapper[4955]: I1128 07:23:41.910373 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd467_must-gather-94qtx_9d2577be-66cf-4b47-83ae-f9ab7b99ffd9/gather/0.log" Nov 28 07:23:45 crc kubenswrapper[4955]: I1128 07:23:45.705564 4955 
scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:23:45 crc kubenswrapper[4955]: E1128 07:23:45.707008 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:23:49 crc kubenswrapper[4955]: I1128 07:23:49.931767 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bd467/must-gather-94qtx"] Nov 28 07:23:49 crc kubenswrapper[4955]: I1128 07:23:49.932823 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bd467/must-gather-94qtx" podUID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" containerName="copy" containerID="cri-o://54f6c29405df8f641d53166d0252d2ef4aa55094680c5681911e26c7f6592140" gracePeriod=2 Nov 28 07:23:49 crc kubenswrapper[4955]: I1128 07:23:49.947291 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bd467/must-gather-94qtx"] Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.058928 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd467_must-gather-94qtx_9d2577be-66cf-4b47-83ae-f9ab7b99ffd9/copy/0.log" Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.059679 4955 generic.go:334] "Generic (PLEG): container finished" podID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" containerID="54f6c29405df8f641d53166d0252d2ef4aa55094680c5681911e26c7f6592140" exitCode=143 Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.363970 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-bd467_must-gather-94qtx_9d2577be-66cf-4b47-83ae-f9ab7b99ffd9/copy/0.log" Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.364647 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.441071 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h57th\" (UniqueName: \"kubernetes.io/projected/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-kube-api-access-h57th\") pod \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\" (UID: \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\") " Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.441153 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-must-gather-output\") pod \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\" (UID: \"9d2577be-66cf-4b47-83ae-f9ab7b99ffd9\") " Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.447443 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-kube-api-access-h57th" (OuterVolumeSpecName: "kube-api-access-h57th") pod "9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" (UID: "9d2577be-66cf-4b47-83ae-f9ab7b99ffd9"). InnerVolumeSpecName "kube-api-access-h57th". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.543120 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h57th\" (UniqueName: \"kubernetes.io/projected/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-kube-api-access-h57th\") on node \"crc\" DevicePath \"\"" Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.582194 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" (UID: "9d2577be-66cf-4b47-83ae-f9ab7b99ffd9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:23:50 crc kubenswrapper[4955]: I1128 07:23:50.645026 4955 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 07:23:51 crc kubenswrapper[4955]: I1128 07:23:51.079367 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bd467_must-gather-94qtx_9d2577be-66cf-4b47-83ae-f9ab7b99ffd9/copy/0.log" Nov 28 07:23:51 crc kubenswrapper[4955]: I1128 07:23:51.081094 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bd467/must-gather-94qtx" Nov 28 07:23:51 crc kubenswrapper[4955]: I1128 07:23:51.081126 4955 scope.go:117] "RemoveContainer" containerID="54f6c29405df8f641d53166d0252d2ef4aa55094680c5681911e26c7f6592140" Nov 28 07:23:51 crc kubenswrapper[4955]: I1128 07:23:51.115192 4955 scope.go:117] "RemoveContainer" containerID="699973d3ccbb139edb26e2d0a842e77f68c1ad289e50784138c9165c1af88eed" Nov 28 07:23:51 crc kubenswrapper[4955]: I1128 07:23:51.732730 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" path="/var/lib/kubelet/pods/9d2577be-66cf-4b47-83ae-f9ab7b99ffd9/volumes" Nov 28 07:23:56 crc kubenswrapper[4955]: I1128 07:23:56.704333 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:23:56 crc kubenswrapper[4955]: E1128 07:23:56.705306 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:24:08 crc kubenswrapper[4955]: I1128 07:24:08.704584 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:24:08 crc kubenswrapper[4955]: E1128 07:24:08.705482 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" 
podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:24:20 crc kubenswrapper[4955]: I1128 07:24:20.705600 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:24:20 crc kubenswrapper[4955]: E1128 07:24:20.706909 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:24:33 crc kubenswrapper[4955]: I1128 07:24:33.704848 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:24:33 crc kubenswrapper[4955]: E1128 07:24:33.705541 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:24:46 crc kubenswrapper[4955]: I1128 07:24:46.704340 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:24:46 crc kubenswrapper[4955]: E1128 07:24:46.705149 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:24:57 crc kubenswrapper[4955]: I1128 07:24:57.718708 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:24:57 crc kubenswrapper[4955]: E1128 07:24:57.719539 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:25:11 crc kubenswrapper[4955]: I1128 07:25:11.705800 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:25:11 crc kubenswrapper[4955]: E1128 07:25:11.707243 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:25:23 crc kubenswrapper[4955]: I1128 07:25:23.704916 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:25:23 crc kubenswrapper[4955]: E1128 07:25:23.705764 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:25:28 crc kubenswrapper[4955]: I1128 07:25:28.651835 4955 scope.go:117] "RemoveContainer" containerID="2c894d96311810a718b5113d743a3b53ad8701ef2f33831994d10a5336159b2a" Nov 28 07:25:37 crc kubenswrapper[4955]: I1128 07:25:37.712314 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:25:37 crc kubenswrapper[4955]: E1128 07:25:37.714883 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:25:52 crc kubenswrapper[4955]: I1128 07:25:52.705165 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:25:52 crc kubenswrapper[4955]: E1128 07:25:52.706231 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" Nov 28 07:26:05 crc kubenswrapper[4955]: I1128 07:26:05.704812 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:26:06 crc kubenswrapper[4955]: I1128 07:26:06.574876 4955 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"8ee8210f6ade8bba459585a59eae91d5491f4c3ae83126bf725ba9f746531a30"} Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.656404 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x6zl9/must-gather-rqb4p"] Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657484 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerName="extract-content" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657499 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerName="extract-content" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657539 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657546 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657562 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" containerName="gather" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657570 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" containerName="gather" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657589 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerName="extract-utilities" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657597 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" 
containerName="extract-utilities" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657617 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657625 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657637 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657644 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657655 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" containerName="extract-utilities" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657662 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" containerName="extract-utilities" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657684 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="extract-content" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657691 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="extract-content" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657710 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" containerName="copy" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657718 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" 
containerName="copy" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657734 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" containerName="extract-content" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657741 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" containerName="extract-content" Nov 28 07:26:28 crc kubenswrapper[4955]: E1128 07:26:28.657756 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="extract-utilities" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657763 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="extract-utilities" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.657979 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" containerName="gather" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.658006 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f147322e-8414-4565-8d52-5566735ac784" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.658022 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a5b36b-fe01-4916-a830-0f1908c21875" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.658032 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2577be-66cf-4b47-83ae-f9ab7b99ffd9" containerName="copy" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.658048 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e45ae4-1bc8-483d-a487-dc2195b7285b" containerName="registry-server" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.659305 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.660936 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-x6zl9"/"default-dockercfg-mm5sh" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.664959 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x6zl9/must-gather-rqb4p"] Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.666249 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x6zl9"/"openshift-service-ca.crt" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.666265 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-x6zl9"/"kube-root-ca.crt" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.676834 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c0d2fd4-ec67-4a89-8549-3db777959f88-must-gather-output\") pod \"must-gather-rqb4p\" (UID: \"2c0d2fd4-ec67-4a89-8549-3db777959f88\") " pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.677005 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjtcj\" (UniqueName: \"kubernetes.io/projected/2c0d2fd4-ec67-4a89-8549-3db777959f88-kube-api-access-xjtcj\") pod \"must-gather-rqb4p\" (UID: \"2c0d2fd4-ec67-4a89-8549-3db777959f88\") " pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.722437 4955 scope.go:117] "RemoveContainer" containerID="6e437d48c4adc71926173f1fb7c760121a0f14e9041eb6caff88d1f50a7d9015" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.779623 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/2c0d2fd4-ec67-4a89-8549-3db777959f88-must-gather-output\") pod \"must-gather-rqb4p\" (UID: \"2c0d2fd4-ec67-4a89-8549-3db777959f88\") " pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.779714 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjtcj\" (UniqueName: \"kubernetes.io/projected/2c0d2fd4-ec67-4a89-8549-3db777959f88-kube-api-access-xjtcj\") pod \"must-gather-rqb4p\" (UID: \"2c0d2fd4-ec67-4a89-8549-3db777959f88\") " pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.780178 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c0d2fd4-ec67-4a89-8549-3db777959f88-must-gather-output\") pod \"must-gather-rqb4p\" (UID: \"2c0d2fd4-ec67-4a89-8549-3db777959f88\") " pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.797778 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjtcj\" (UniqueName: \"kubernetes.io/projected/2c0d2fd4-ec67-4a89-8549-3db777959f88-kube-api-access-xjtcj\") pod \"must-gather-rqb4p\" (UID: \"2c0d2fd4-ec67-4a89-8549-3db777959f88\") " pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:26:28 crc kubenswrapper[4955]: I1128 07:26:28.976692 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:26:29 crc kubenswrapper[4955]: I1128 07:26:29.469084 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-x6zl9/must-gather-rqb4p"] Nov 28 07:26:29 crc kubenswrapper[4955]: I1128 07:26:29.839616 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" event={"ID":"2c0d2fd4-ec67-4a89-8549-3db777959f88","Type":"ContainerStarted","Data":"6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7"} Nov 28 07:26:29 crc kubenswrapper[4955]: I1128 07:26:29.839962 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" event={"ID":"2c0d2fd4-ec67-4a89-8549-3db777959f88","Type":"ContainerStarted","Data":"065ddb3249f304192cef28e057564fa9bee5f76d509aa4eaaf5c031819288e92"} Nov 28 07:26:30 crc kubenswrapper[4955]: I1128 07:26:30.861378 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" event={"ID":"2c0d2fd4-ec67-4a89-8549-3db777959f88","Type":"ContainerStarted","Data":"078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10"} Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.320192 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" podStartSLOduration=6.320176383 podStartE2EDuration="6.320176383s" podCreationTimestamp="2025-11-28 07:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:26:30.887839317 +0000 UTC m=+3913.477094897" watchObservedRunningTime="2025-11-28 07:26:34.320176383 +0000 UTC m=+3916.909431953" Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.321357 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-87tfl"] Nov 28 07:26:34 crc kubenswrapper[4955]: 
I1128 07:26:34.322439 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.406260 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-628sl\" (UniqueName: \"kubernetes.io/projected/956d6cd9-3553-4370-8109-d4837d486e4d-kube-api-access-628sl\") pod \"crc-debug-87tfl\" (UID: \"956d6cd9-3553-4370-8109-d4837d486e4d\") " pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.406569 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956d6cd9-3553-4370-8109-d4837d486e4d-host\") pod \"crc-debug-87tfl\" (UID: \"956d6cd9-3553-4370-8109-d4837d486e4d\") " pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.508985 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-628sl\" (UniqueName: \"kubernetes.io/projected/956d6cd9-3553-4370-8109-d4837d486e4d-kube-api-access-628sl\") pod \"crc-debug-87tfl\" (UID: \"956d6cd9-3553-4370-8109-d4837d486e4d\") " pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.509095 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956d6cd9-3553-4370-8109-d4837d486e4d-host\") pod \"crc-debug-87tfl\" (UID: \"956d6cd9-3553-4370-8109-d4837d486e4d\") " pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.509264 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956d6cd9-3553-4370-8109-d4837d486e4d-host\") pod \"crc-debug-87tfl\" (UID: \"956d6cd9-3553-4370-8109-d4837d486e4d\") 
" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.530058 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-628sl\" (UniqueName: \"kubernetes.io/projected/956d6cd9-3553-4370-8109-d4837d486e4d-kube-api-access-628sl\") pod \"crc-debug-87tfl\" (UID: \"956d6cd9-3553-4370-8109-d4837d486e4d\") " pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.639730 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:26:34 crc kubenswrapper[4955]: W1128 07:26:34.682514 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod956d6cd9_3553_4370_8109_d4837d486e4d.slice/crio-adf6d918b6052b24185c0906982298618c9c45ab0abbacb24f7ccbcd6648ca4b WatchSource:0}: Error finding container adf6d918b6052b24185c0906982298618c9c45ab0abbacb24f7ccbcd6648ca4b: Status 404 returned error can't find the container with id adf6d918b6052b24185c0906982298618c9c45ab0abbacb24f7ccbcd6648ca4b Nov 28 07:26:34 crc kubenswrapper[4955]: I1128 07:26:34.896241 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" event={"ID":"956d6cd9-3553-4370-8109-d4837d486e4d","Type":"ContainerStarted","Data":"adf6d918b6052b24185c0906982298618c9c45ab0abbacb24f7ccbcd6648ca4b"} Nov 28 07:26:35 crc kubenswrapper[4955]: I1128 07:26:35.906498 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" event={"ID":"956d6cd9-3553-4370-8109-d4837d486e4d","Type":"ContainerStarted","Data":"8ee057a4aff182c1d856e9b3d209d19173f7ea47686e99a77f7008a2faa9c0a3"} Nov 28 07:26:35 crc kubenswrapper[4955]: I1128 07:26:35.925216 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" 
podStartSLOduration=1.925195548 podStartE2EDuration="1.925195548s" podCreationTimestamp="2025-11-28 07:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 07:26:35.921833373 +0000 UTC m=+3918.511088953" watchObservedRunningTime="2025-11-28 07:26:35.925195548 +0000 UTC m=+3918.514451118" Nov 28 07:27:07 crc kubenswrapper[4955]: I1128 07:27:07.209124 4955 generic.go:334] "Generic (PLEG): container finished" podID="956d6cd9-3553-4370-8109-d4837d486e4d" containerID="8ee057a4aff182c1d856e9b3d209d19173f7ea47686e99a77f7008a2faa9c0a3" exitCode=0 Nov 28 07:27:07 crc kubenswrapper[4955]: I1128 07:27:07.209217 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" event={"ID":"956d6cd9-3553-4370-8109-d4837d486e4d","Type":"ContainerDied","Data":"8ee057a4aff182c1d856e9b3d209d19173f7ea47686e99a77f7008a2faa9c0a3"} Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.343398 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.373371 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-87tfl"] Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.381353 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-87tfl"] Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.459119 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956d6cd9-3553-4370-8109-d4837d486e4d-host\") pod \"956d6cd9-3553-4370-8109-d4837d486e4d\" (UID: \"956d6cd9-3553-4370-8109-d4837d486e4d\") " Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.459245 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/956d6cd9-3553-4370-8109-d4837d486e4d-host" (OuterVolumeSpecName: "host") pod "956d6cd9-3553-4370-8109-d4837d486e4d" (UID: "956d6cd9-3553-4370-8109-d4837d486e4d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.459363 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-628sl\" (UniqueName: \"kubernetes.io/projected/956d6cd9-3553-4370-8109-d4837d486e4d-kube-api-access-628sl\") pod \"956d6cd9-3553-4370-8109-d4837d486e4d\" (UID: \"956d6cd9-3553-4370-8109-d4837d486e4d\") " Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.459919 4955 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/956d6cd9-3553-4370-8109-d4837d486e4d-host\") on node \"crc\" DevicePath \"\"" Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.464834 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/956d6cd9-3553-4370-8109-d4837d486e4d-kube-api-access-628sl" (OuterVolumeSpecName: "kube-api-access-628sl") pod "956d6cd9-3553-4370-8109-d4837d486e4d" (UID: "956d6cd9-3553-4370-8109-d4837d486e4d"). InnerVolumeSpecName "kube-api-access-628sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:27:08 crc kubenswrapper[4955]: I1128 07:27:08.561604 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-628sl\" (UniqueName: \"kubernetes.io/projected/956d6cd9-3553-4370-8109-d4837d486e4d-kube-api-access-628sl\") on node \"crc\" DevicePath \"\"" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.231194 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adf6d918b6052b24185c0906982298618c9c45ab0abbacb24f7ccbcd6648ca4b" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.231269 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-87tfl" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.597664 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-476cw"] Nov 28 07:27:09 crc kubenswrapper[4955]: E1128 07:27:09.598110 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="956d6cd9-3553-4370-8109-d4837d486e4d" containerName="container-00" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.598123 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="956d6cd9-3553-4370-8109-d4837d486e4d" containerName="container-00" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.598312 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="956d6cd9-3553-4370-8109-d4837d486e4d" containerName="container-00" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.598988 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.678839 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fstvh\" (UniqueName: \"kubernetes.io/projected/08060d6c-8318-4dfa-b687-f2287d758af8-kube-api-access-fstvh\") pod \"crc-debug-476cw\" (UID: \"08060d6c-8318-4dfa-b687-f2287d758af8\") " pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.678946 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08060d6c-8318-4dfa-b687-f2287d758af8-host\") pod \"crc-debug-476cw\" (UID: \"08060d6c-8318-4dfa-b687-f2287d758af8\") " pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.715544 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="956d6cd9-3553-4370-8109-d4837d486e4d" 
path="/var/lib/kubelet/pods/956d6cd9-3553-4370-8109-d4837d486e4d/volumes" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.781000 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fstvh\" (UniqueName: \"kubernetes.io/projected/08060d6c-8318-4dfa-b687-f2287d758af8-kube-api-access-fstvh\") pod \"crc-debug-476cw\" (UID: \"08060d6c-8318-4dfa-b687-f2287d758af8\") " pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.781133 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08060d6c-8318-4dfa-b687-f2287d758af8-host\") pod \"crc-debug-476cw\" (UID: \"08060d6c-8318-4dfa-b687-f2287d758af8\") " pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.783442 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08060d6c-8318-4dfa-b687-f2287d758af8-host\") pod \"crc-debug-476cw\" (UID: \"08060d6c-8318-4dfa-b687-f2287d758af8\") " pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.813277 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fstvh\" (UniqueName: \"kubernetes.io/projected/08060d6c-8318-4dfa-b687-f2287d758af8-kube-api-access-fstvh\") pod \"crc-debug-476cw\" (UID: \"08060d6c-8318-4dfa-b687-f2287d758af8\") " pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:09 crc kubenswrapper[4955]: I1128 07:27:09.918200 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:10 crc kubenswrapper[4955]: I1128 07:27:10.245865 4955 generic.go:334] "Generic (PLEG): container finished" podID="08060d6c-8318-4dfa-b687-f2287d758af8" containerID="a641dde34f172ab04d082d61a21c22a1c7107321d73be2edeed1a0e7c94752e5" exitCode=0 Nov 28 07:27:10 crc kubenswrapper[4955]: I1128 07:27:10.246208 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/crc-debug-476cw" event={"ID":"08060d6c-8318-4dfa-b687-f2287d758af8","Type":"ContainerDied","Data":"a641dde34f172ab04d082d61a21c22a1c7107321d73be2edeed1a0e7c94752e5"} Nov 28 07:27:10 crc kubenswrapper[4955]: I1128 07:27:10.246237 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/crc-debug-476cw" event={"ID":"08060d6c-8318-4dfa-b687-f2287d758af8","Type":"ContainerStarted","Data":"86bc18e1139abd18708094f1b6b81c6cf75948b65be81fbcb43cd759b293b37a"} Nov 28 07:27:10 crc kubenswrapper[4955]: I1128 07:27:10.602278 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-476cw"] Nov 28 07:27:10 crc kubenswrapper[4955]: I1128 07:27:10.611030 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-476cw"] Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.348100 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.411452 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fstvh\" (UniqueName: \"kubernetes.io/projected/08060d6c-8318-4dfa-b687-f2287d758af8-kube-api-access-fstvh\") pod \"08060d6c-8318-4dfa-b687-f2287d758af8\" (UID: \"08060d6c-8318-4dfa-b687-f2287d758af8\") " Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.411643 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08060d6c-8318-4dfa-b687-f2287d758af8-host\") pod \"08060d6c-8318-4dfa-b687-f2287d758af8\" (UID: \"08060d6c-8318-4dfa-b687-f2287d758af8\") " Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.411970 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08060d6c-8318-4dfa-b687-f2287d758af8-host" (OuterVolumeSpecName: "host") pod "08060d6c-8318-4dfa-b687-f2287d758af8" (UID: "08060d6c-8318-4dfa-b687-f2287d758af8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.413099 4955 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/08060d6c-8318-4dfa-b687-f2287d758af8-host\") on node \"crc\" DevicePath \"\"" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.418522 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08060d6c-8318-4dfa-b687-f2287d758af8-kube-api-access-fstvh" (OuterVolumeSpecName: "kube-api-access-fstvh") pod "08060d6c-8318-4dfa-b687-f2287d758af8" (UID: "08060d6c-8318-4dfa-b687-f2287d758af8"). InnerVolumeSpecName "kube-api-access-fstvh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.514951 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fstvh\" (UniqueName: \"kubernetes.io/projected/08060d6c-8318-4dfa-b687-f2287d758af8-kube-api-access-fstvh\") on node \"crc\" DevicePath \"\"" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.726756 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08060d6c-8318-4dfa-b687-f2287d758af8" path="/var/lib/kubelet/pods/08060d6c-8318-4dfa-b687-f2287d758af8/volumes" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.851584 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-lvtl5"] Nov 28 07:27:11 crc kubenswrapper[4955]: E1128 07:27:11.852524 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08060d6c-8318-4dfa-b687-f2287d758af8" containerName="container-00" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.852549 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="08060d6c-8318-4dfa-b687-f2287d758af8" containerName="container-00" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.852838 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="08060d6c-8318-4dfa-b687-f2287d758af8" containerName="container-00" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.853617 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.924361 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slrkt\" (UniqueName: \"kubernetes.io/projected/5c888b2c-b7b8-41f7-964c-0e64050d5299-kube-api-access-slrkt\") pod \"crc-debug-lvtl5\" (UID: \"5c888b2c-b7b8-41f7-964c-0e64050d5299\") " pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:11 crc kubenswrapper[4955]: I1128 07:27:11.925116 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c888b2c-b7b8-41f7-964c-0e64050d5299-host\") pod \"crc-debug-lvtl5\" (UID: \"5c888b2c-b7b8-41f7-964c-0e64050d5299\") " pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:12 crc kubenswrapper[4955]: I1128 07:27:12.026642 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c888b2c-b7b8-41f7-964c-0e64050d5299-host\") pod \"crc-debug-lvtl5\" (UID: \"5c888b2c-b7b8-41f7-964c-0e64050d5299\") " pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:12 crc kubenswrapper[4955]: I1128 07:27:12.026822 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slrkt\" (UniqueName: \"kubernetes.io/projected/5c888b2c-b7b8-41f7-964c-0e64050d5299-kube-api-access-slrkt\") pod \"crc-debug-lvtl5\" (UID: \"5c888b2c-b7b8-41f7-964c-0e64050d5299\") " pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:12 crc kubenswrapper[4955]: I1128 07:27:12.026820 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c888b2c-b7b8-41f7-964c-0e64050d5299-host\") pod \"crc-debug-lvtl5\" (UID: \"5c888b2c-b7b8-41f7-964c-0e64050d5299\") " pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:12 crc 
kubenswrapper[4955]: I1128 07:27:12.052083 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slrkt\" (UniqueName: \"kubernetes.io/projected/5c888b2c-b7b8-41f7-964c-0e64050d5299-kube-api-access-slrkt\") pod \"crc-debug-lvtl5\" (UID: \"5c888b2c-b7b8-41f7-964c-0e64050d5299\") " pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:12 crc kubenswrapper[4955]: I1128 07:27:12.177276 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:12 crc kubenswrapper[4955]: W1128 07:27:12.223666 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c888b2c_b7b8_41f7_964c_0e64050d5299.slice/crio-8292d6c998fcfbc180453c32f94cfeaadb0adc5167d93c85fc290710f6d8d258 WatchSource:0}: Error finding container 8292d6c998fcfbc180453c32f94cfeaadb0adc5167d93c85fc290710f6d8d258: Status 404 returned error can't find the container with id 8292d6c998fcfbc180453c32f94cfeaadb0adc5167d93c85fc290710f6d8d258 Nov 28 07:27:12 crc kubenswrapper[4955]: I1128 07:27:12.272387 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" event={"ID":"5c888b2c-b7b8-41f7-964c-0e64050d5299","Type":"ContainerStarted","Data":"8292d6c998fcfbc180453c32f94cfeaadb0adc5167d93c85fc290710f6d8d258"} Nov 28 07:27:12 crc kubenswrapper[4955]: I1128 07:27:12.275483 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-476cw" Nov 28 07:27:12 crc kubenswrapper[4955]: I1128 07:27:12.276486 4955 scope.go:117] "RemoveContainer" containerID="a641dde34f172ab04d082d61a21c22a1c7107321d73be2edeed1a0e7c94752e5" Nov 28 07:27:13 crc kubenswrapper[4955]: I1128 07:27:13.289570 4955 generic.go:334] "Generic (PLEG): container finished" podID="5c888b2c-b7b8-41f7-964c-0e64050d5299" containerID="10654914253a9717499b23284493abf9104a19c1e34ce4834675a5cb8080fdbb" exitCode=0 Nov 28 07:27:13 crc kubenswrapper[4955]: I1128 07:27:13.289751 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" event={"ID":"5c888b2c-b7b8-41f7-964c-0e64050d5299","Type":"ContainerDied","Data":"10654914253a9717499b23284493abf9104a19c1e34ce4834675a5cb8080fdbb"} Nov 28 07:27:13 crc kubenswrapper[4955]: I1128 07:27:13.336617 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-lvtl5"] Nov 28 07:27:13 crc kubenswrapper[4955]: I1128 07:27:13.349874 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x6zl9/crc-debug-lvtl5"] Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.329991 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.346319 4955 scope.go:117] "RemoveContainer" containerID="10654914253a9717499b23284493abf9104a19c1e34ce4834675a5cb8080fdbb" Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.346458 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/crc-debug-lvtl5" Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.492221 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slrkt\" (UniqueName: \"kubernetes.io/projected/5c888b2c-b7b8-41f7-964c-0e64050d5299-kube-api-access-slrkt\") pod \"5c888b2c-b7b8-41f7-964c-0e64050d5299\" (UID: \"5c888b2c-b7b8-41f7-964c-0e64050d5299\") " Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.492316 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c888b2c-b7b8-41f7-964c-0e64050d5299-host\") pod \"5c888b2c-b7b8-41f7-964c-0e64050d5299\" (UID: \"5c888b2c-b7b8-41f7-964c-0e64050d5299\") " Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.492749 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c888b2c-b7b8-41f7-964c-0e64050d5299-host" (OuterVolumeSpecName: "host") pod "5c888b2c-b7b8-41f7-964c-0e64050d5299" (UID: "5c888b2c-b7b8-41f7-964c-0e64050d5299"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.500702 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c888b2c-b7b8-41f7-964c-0e64050d5299-kube-api-access-slrkt" (OuterVolumeSpecName: "kube-api-access-slrkt") pod "5c888b2c-b7b8-41f7-964c-0e64050d5299" (UID: "5c888b2c-b7b8-41f7-964c-0e64050d5299"). InnerVolumeSpecName "kube-api-access-slrkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.594533 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slrkt\" (UniqueName: \"kubernetes.io/projected/5c888b2c-b7b8-41f7-964c-0e64050d5299-kube-api-access-slrkt\") on node \"crc\" DevicePath \"\"" Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.594571 4955 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5c888b2c-b7b8-41f7-964c-0e64050d5299-host\") on node \"crc\" DevicePath \"\"" Nov 28 07:27:15 crc kubenswrapper[4955]: I1128 07:27:15.719357 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c888b2c-b7b8-41f7-964c-0e64050d5299" path="/var/lib/kubelet/pods/5c888b2c-b7b8-41f7-964c-0e64050d5299/volumes" Nov 28 07:27:39 crc kubenswrapper[4955]: I1128 07:27:39.807422 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b4956ccd4-8qnhx_eb37217e-f20a-4e50-b616-b0b1231fbd89/barbican-api/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.009648 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7b4956ccd4-8qnhx_eb37217e-f20a-4e50-b616-b0b1231fbd89/barbican-api-log/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.036110 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6985c74bc8-qgvjf_f30cb01b-f625-4031-98a0-272f85d43a81/barbican-keystone-listener/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.103869 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6985c74bc8-qgvjf_f30cb01b-f625-4031-98a0-272f85d43a81/barbican-keystone-listener-log/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.208237 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-5dfcd47cfc-s75nx_24535783-21c6-4550-965e-7fd84038058b/barbican-worker-log/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.211020 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5dfcd47cfc-s75nx_24535783-21c6-4550-965e-7fd84038058b/barbican-worker/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.389586 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-f4gzs_be0906bb-475c-4229-9a9f-9a5361e6172e/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.489666 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cee6f72e-fd30-4482-881a-4afb4c003099/ceilometer-central-agent/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.512730 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cee6f72e-fd30-4482-881a-4afb4c003099/ceilometer-notification-agent/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.580922 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cee6f72e-fd30-4482-881a-4afb4c003099/proxy-httpd/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.629814 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cee6f72e-fd30-4482-881a-4afb4c003099/sg-core/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.745977 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fdcb880-5f80-4347-81ef-f9f5ff9a097b/cinder-api/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.786803 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_9fdcb880-5f80-4347-81ef-f9f5ff9a097b/cinder-api-log/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.921863 4955 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_cinder-scheduler-0_7f74ee90-8d6d-42f1-8aa7-61d06d62f07c/cinder-scheduler/0.log" Nov 28 07:27:40 crc kubenswrapper[4955]: I1128 07:27:40.944960 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7f74ee90-8d6d-42f1-8aa7-61d06d62f07c/probe/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.101960 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-b567k_3efdabfd-7ad3-4586-8398-97512113e085/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.145143 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-vrmg9_40082d1e-0844-4d3d-9c68-25fb8eb44351/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.313645 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-jb4z6_5eb6e022-3f20-498e-ac8d-8fed796ff122/init/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.429267 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-jb4z6_5eb6e022-3f20-498e-ac8d-8fed796ff122/init/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.481284 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-jb4z6_5eb6e022-3f20-498e-ac8d-8fed796ff122/dnsmasq-dns/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.499993 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-wvcdq_40e141ea-e10b-4e62-a075-da26dee75286/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.664933 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_02ab7a37-574b-4e32-bc8a-c5dd638a6a45/glance-httpd/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.683790 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_02ab7a37-574b-4e32-bc8a-c5dd638a6a45/glance-log/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.901873 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_104ece36-bc05-45c5-984c-55d61b6ebe8b/glance-httpd/0.log" Nov 28 07:27:41 crc kubenswrapper[4955]: I1128 07:27:41.964545 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_104ece36-bc05-45c5-984c-55d61b6ebe8b/glance-log/0.log" Nov 28 07:27:42 crc kubenswrapper[4955]: I1128 07:27:42.007203 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56f45c5b6-nqg9b_0540bb1f-c904-4b07-acda-ce47d0bdfa7c/horizon/0.log" Nov 28 07:27:42 crc kubenswrapper[4955]: I1128 07:27:42.177279 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5q92s_93204339-2c92-4d5d-a519-402ee3a45e79/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:42 crc kubenswrapper[4955]: I1128 07:27:42.401352 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-2jx76_86dd2a3d-7a8d-4695-98cb-bb3b8c55ec3d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:42 crc kubenswrapper[4955]: I1128 07:27:42.421455 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-56f45c5b6-nqg9b_0540bb1f-c904-4b07-acda-ce47d0bdfa7c/horizon-log/0.log" Nov 28 07:27:42 crc kubenswrapper[4955]: I1128 07:27:42.642102 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29405221-4mmm4_0b8ffa87-03b1-4df9-a491-15db50f8a75e/keystone-cron/0.log" Nov 28 07:27:42 crc kubenswrapper[4955]: I1128 07:27:42.666639 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-766d6648f9-vfvxt_374d1f5d-9bd1-4362-a245-97f658097965/keystone-api/0.log" Nov 28 07:27:42 crc kubenswrapper[4955]: I1128 07:27:42.939071 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-cbdvc_845c1878-1788-4409-bbd8-a76a2f3eed71/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:42 crc kubenswrapper[4955]: I1128 07:27:42.953272 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_ab74c890-3754-4fdb-84ab-0884ae7ca237/kube-state-metrics/0.log" Nov 28 07:27:43 crc kubenswrapper[4955]: I1128 07:27:43.256292 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6478fb8469-kzjkp_8716e967-61aa-43b9-9d68-cb6699c5c673/neutron-api/0.log" Nov 28 07:27:43 crc kubenswrapper[4955]: I1128 07:27:43.335869 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6478fb8469-kzjkp_8716e967-61aa-43b9-9d68-cb6699c5c673/neutron-httpd/0.log" Nov 28 07:27:43 crc kubenswrapper[4955]: I1128 07:27:43.495776 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kwpfc_17b265c1-83dd-4a5c-9e5b-92923c919d1d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:44 crc kubenswrapper[4955]: I1128 07:27:44.044422 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_03bbb794-571b-4980-8445-7766a14bb5c9/nova-api-log/0.log" Nov 28 07:27:44 crc kubenswrapper[4955]: I1128 07:27:44.058100 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_1effeb5c-c81a-43ff-8624-9c077f2484a3/nova-cell0-conductor-conductor/0.log" Nov 28 07:27:44 crc kubenswrapper[4955]: I1128 07:27:44.401747 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3d076178-18c0-42af-b40b-3cc8f1cb77cb/nova-cell1-conductor-conductor/0.log" Nov 28 07:27:44 crc kubenswrapper[4955]: I1128 07:27:44.487369 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_03bbb794-571b-4980-8445-7766a14bb5c9/nova-api-api/0.log" Nov 28 07:27:44 crc kubenswrapper[4955]: I1128 07:27:44.495095 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fe4d7165-1010-42a5-a707-257169437be1/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 07:27:44 crc kubenswrapper[4955]: I1128 07:27:44.641491 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-db78m_0175b4ba-a3eb-4d4f-bd08-e1ee1607fe39/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:44 crc kubenswrapper[4955]: I1128 07:27:44.851104 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_932b3fc6-dd61-4bcd-9836-f04de5a42ee7/nova-metadata-log/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: I1128 07:27:45.136667 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5ce3cc8f-9d19-49fa-83a9-d71cf669d26c/mysql-bootstrap/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: I1128 07:27:45.164726 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_61ac9549-3394-4586-ae7d-afede69f862c/nova-scheduler-scheduler/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: I1128 07:27:45.252903 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5ce3cc8f-9d19-49fa-83a9-d71cf669d26c/mysql-bootstrap/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: 
I1128 07:27:45.344986 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_5ce3cc8f-9d19-49fa-83a9-d71cf669d26c/galera/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: I1128 07:27:45.508013 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_da36284b-b2a1-4008-a19c-3916e99c0bec/mysql-bootstrap/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: I1128 07:27:45.706578 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_da36284b-b2a1-4008-a19c-3916e99c0bec/mysql-bootstrap/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: I1128 07:27:45.725414 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_da36284b-b2a1-4008-a19c-3916e99c0bec/galera/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: I1128 07:27:45.928147 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vqt82_3543aa49-473d-4e57-a9eb-edbca5c7f58d/openstack-network-exporter/0.log" Nov 28 07:27:45 crc kubenswrapper[4955]: I1128 07:27:45.937368 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_98ccf66c-347b-4fbe-9b2e-974e15e3eea7/openstackclient/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.124613 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nxwbb_7c1a8276-d93e-498f-94a2-e698b071f1ee/ovsdb-server-init/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.157489 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_932b3fc6-dd61-4bcd-9836-f04de5a42ee7/nova-metadata-metadata/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.350385 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nxwbb_7c1a8276-d93e-498f-94a2-e698b071f1ee/ovsdb-server-init/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.383043 4955 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nxwbb_7c1a8276-d93e-498f-94a2-e698b071f1ee/ovs-vswitchd/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.420383 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-nxwbb_7c1a8276-d93e-498f-94a2-e698b071f1ee/ovsdb-server/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.620370 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-p2bvh_3963971f-dccf-42a8-9889-b5e122ee6809/ovn-controller/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.652880 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dmn6w_9e473921-1378-4318-89ef-7f2f39c41aed/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.797422 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_88f30640-ea6e-4479-b4ab-4e21f96f7ddb/openstack-network-exporter/0.log" Nov 28 07:27:46 crc kubenswrapper[4955]: I1128 07:27:46.836211 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_88f30640-ea6e-4479-b4ab-4e21f96f7ddb/ovn-northd/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.015050 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_184e43b4-9c7c-4df1-b1a7-503ef8139459/openstack-network-exporter/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.115002 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_184e43b4-9c7c-4df1-b1a7-503ef8139459/ovsdbserver-nb/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.203543 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1ba1430-28cb-4bba-936d-00e8988eab09/ovsdbserver-sb/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.203965 4955 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f1ba1430-28cb-4bba-936d-00e8988eab09/openstack-network-exporter/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.504799 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d446689d4-fvjm6_0a8c9e11-5611-4739-9a2c-24ad016682c0/placement-api/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.541395 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c326a903-f8eb-4e06-a44b-ae3bca93e0b6/setup-container/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.548183 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6d446689d4-fvjm6_0a8c9e11-5611-4739-9a2c-24ad016682c0/placement-log/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.778468 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c326a903-f8eb-4e06-a44b-ae3bca93e0b6/setup-container/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.854150 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c326a903-f8eb-4e06-a44b-ae3bca93e0b6/rabbitmq/0.log" Nov 28 07:27:47 crc kubenswrapper[4955]: I1128 07:27:47.856305 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8677f8b0-5621-470c-826f-1c2f9725c6d7/setup-container/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.088446 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8677f8b0-5621-470c-826f-1c2f9725c6d7/setup-container/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.133442 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8677f8b0-5621-470c-826f-1c2f9725c6d7/rabbitmq/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.158996 4955 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rf4hc_6477c9e8-dda5-46fe-8b80-3ccc99f2b00d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.411177 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-kk4kh_916114e1-c9f4-45af-acbd-14fa82b380ed/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.493708 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xm6d9_54f3d846-b19a-415e-93bb-9f4c1a3e02dc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.592245 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-pzmhq_41f03c76-5015-4f05-bf3d-0c21610c1a50/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.700057 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-hqrs6_6788af76-b07b-492d-b4bb-dceb2d35b853/ssh-known-hosts-edpm-deployment/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.855022 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d59886dc-t4pgs_91783657-7b6c-4053-9c14-aed825d54a73/proxy-server/0.log" Nov 28 07:27:48 crc kubenswrapper[4955]: I1128 07:27:48.982902 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5d59886dc-t4pgs_91783657-7b6c-4053-9c14-aed825d54a73/proxy-httpd/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.053557 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-hf68n_f8cdb34d-d310-43c4-bdcd-83e12752f6ea/swift-ring-rebalance/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.226864 4955 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/account-reaper/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.252536 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/account-auditor/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.315290 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/account-replicator/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.334347 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/account-server/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.420742 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/container-auditor/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.475335 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/container-replicator/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.504137 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/container-server/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.592619 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/container-updater/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.632410 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-auditor/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.716629 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-expirer/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.743439 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-replicator/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.830734 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-server/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.852206 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/object-updater/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.909168 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/swift-recon-cron/0.log" Nov 28 07:27:49 crc kubenswrapper[4955]: I1128 07:27:49.956871 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0b38ef12-050e-4f3e-9b92-79ad3baba7d7/rsync/0.log" Nov 28 07:27:50 crc kubenswrapper[4955]: I1128 07:27:50.263581 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-nf7hb_d7286bef-2382-464e-95fa-61654cead41d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:27:50 crc kubenswrapper[4955]: I1128 07:27:50.329789 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_81ccd45f-3f32-4e86-8874-0468a6fc2471/tempest-tests-tempest-tests-runner/0.log" Nov 28 07:27:50 crc kubenswrapper[4955]: I1128 07:27:50.471881 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_11625f83-961b-4c79-aa1a-d8d9fe1c6bf1/test-operator-logs-container/0.log" Nov 28 07:27:50 crc kubenswrapper[4955]: I1128 
07:27:50.602075 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jck8m_a95ee68c-d5b2-490f-a4e4-33bb8bb56536/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 07:28:00 crc kubenswrapper[4955]: I1128 07:28:00.064377 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b8f1a214-823b-4a75-ada2-b5973ad7abd6/memcached/0.log" Nov 28 07:28:18 crc kubenswrapper[4955]: I1128 07:28:18.477421 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/util/0.log" Nov 28 07:28:18 crc kubenswrapper[4955]: I1128 07:28:18.683599 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/util/0.log" Nov 28 07:28:18 crc kubenswrapper[4955]: I1128 07:28:18.723283 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/pull/0.log" Nov 28 07:28:18 crc kubenswrapper[4955]: I1128 07:28:18.738763 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/pull/0.log" Nov 28 07:28:18 crc kubenswrapper[4955]: I1128 07:28:18.869466 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/pull/0.log" Nov 28 07:28:18 crc kubenswrapper[4955]: I1128 07:28:18.890785 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/util/0.log" Nov 28 07:28:18 crc kubenswrapper[4955]: I1128 07:28:18.937554 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8627b9ced6aa9d7f83c8fbef4befbec88eaffe1f4730df08242396b43fjh4pl_af247d46-e077-45be-af71-143bfc2cd71c/extract/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.050430 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-5kt75_3e51ea77-cbc1-4ebd-9247-335d93211353/kube-rbac-proxy/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.146905 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-5kt75_3e51ea77-cbc1-4ebd-9247-335d93211353/manager/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.208789 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-p5n27_813a8c4e-06bd-467e-9b80-0e3e88fb361a/kube-rbac-proxy/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.277653 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-p5n27_813a8c4e-06bd-467e-9b80-0e3e88fb361a/manager/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.360012 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-8cmf7_ef549437-6bef-428a-991f-b38cc613ec1e/kube-rbac-proxy/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.414075 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-8cmf7_ef549437-6bef-428a-991f-b38cc613ec1e/manager/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: 
I1128 07:28:19.552798 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-jtxkh_8bcb6097-d2d8-4190-afbd-644daa5ce7b6/kube-rbac-proxy/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.662870 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-jtxkh_8bcb6097-d2d8-4190-afbd-644daa5ce7b6/manager/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.700847 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-lgnvq_d2c0d9ce-4c16-451d-948b-75ae7bbca487/kube-rbac-proxy/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.775451 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-lgnvq_d2c0d9ce-4c16-451d-948b-75ae7bbca487/manager/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.859998 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-c7pkv_84c6c0d5-d427-471a-8a54-9d3fc28264bc/kube-rbac-proxy/0.log" Nov 28 07:28:19 crc kubenswrapper[4955]: I1128 07:28:19.888945 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-c7pkv_84c6c0d5-d427-471a-8a54-9d3fc28264bc/manager/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.059929 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-rtv6s_3f5477af-57e8-4a83-95ce-9fea4d62e797/kube-rbac-proxy/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.195974 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-rtv6s_3f5477af-57e8-4a83-95ce-9fea4d62e797/manager/0.log" Nov 28 
07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.210429 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-mdrg9_d8ca8a28-b011-4a61-b37d-5f84543d63bb/kube-rbac-proxy/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.280624 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-mdrg9_d8ca8a28-b011-4a61-b37d-5f84543d63bb/manager/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.358854 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-4pmmp_245721bd-2bc5-4f42-ac45-5ae0b07cd77e/kube-rbac-proxy/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.463072 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-4pmmp_245721bd-2bc5-4f42-ac45-5ae0b07cd77e/manager/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.546361 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-mwk52_042d3c47-fa72-4e2f-a127-2885c81ec7e4/kube-rbac-proxy/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.627317 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-mwk52_042d3c47-fa72-4e2f-a127-2885c81ec7e4/manager/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.716958 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-kdz89_d2d018b1-e591-4109-9b83-82bc60b2cb59/kube-rbac-proxy/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.754647 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-kdz89_d2d018b1-e591-4109-9b83-82bc60b2cb59/manager/0.log" Nov 28 07:28:20 crc kubenswrapper[4955]: I1128 07:28:20.941992 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-fz6rt_4871d492-a015-4a2b-9f6a-62e15bfdb825/kube-rbac-proxy/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 07:28:21.006443 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-fz6rt_4871d492-a015-4a2b-9f6a-62e15bfdb825/manager/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 07:28:21.068284 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-hwmcq_801bb8d6-c107-48ad-b985-62e932b38992/kube-rbac-proxy/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 07:28:21.199404 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-f77r5_a9444b3d-85c5-4f44-953d-65a4dd2f30f2/kube-rbac-proxy/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 07:28:21.219470 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-hwmcq_801bb8d6-c107-48ad-b985-62e932b38992/manager/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 07:28:21.272785 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-f77r5_a9444b3d-85c5-4f44-953d-65a4dd2f30f2/manager/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 07:28:21.402460 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6_a1c5873f-0d08-4f51-aa91-822fc86a33e3/manager/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 
07:28:21.445425 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6b6l4w6_a1c5873f-0d08-4f51-aa91-822fc86a33e3/kube-rbac-proxy/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 07:28:21.843324 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6zdst_739948d5-645f-4c91-9372-588a7128b7b2/registry-server/0.log" Nov 28 07:28:21 crc kubenswrapper[4955]: I1128 07:28:21.890414 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d8f67c45-t6djq_a76c5381-15dd-479f-af8a-78a8c2ec2bad/operator/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.059391 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-qtcjk_f0d92863-0f89-415d-b4a3-24e09fb4ec02/kube-rbac-proxy/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.128076 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-qtcjk_f0d92863-0f89-415d-b4a3-24e09fb4ec02/manager/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.415346 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-4csc4_5d9f654a-a223-4b91-93fd-301807c6f29a/kube-rbac-proxy/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.496648 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-4csc4_5d9f654a-a223-4b91-93fd-301807c6f29a/manager/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.586752 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-n7jmb_5daef806-96c3-439c-85f9-f1ef27a8be0d/operator/0.log" Nov 28 
07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.686824 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bd7f7485b-zbpwx_84a07034-e21d-4e5b-a6ef-ba76d30b662a/manager/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.720063 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-q4wgd_cfa54a97-6210-4566-bf61-c0c7720ec0ec/kube-rbac-proxy/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.800577 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-q4wgd_cfa54a97-6210-4566-bf61-c0c7720ec0ec/manager/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.863848 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-rvkg2_d11a80a8-9bba-491e-aa38-e93e59c3343e/kube-rbac-proxy/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.939951 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-rvkg2_d11a80a8-9bba-491e-aa38-e93e59c3343e/manager/0.log" Nov 28 07:28:22 crc kubenswrapper[4955]: I1128 07:28:22.974693 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-4vhhh_e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac/kube-rbac-proxy/0.log" Nov 28 07:28:23 crc kubenswrapper[4955]: I1128 07:28:23.013854 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-4vhhh_e6317f0e-c7cd-47e6-be5d-2afe8d17c0ac/manager/0.log" Nov 28 07:28:23 crc kubenswrapper[4955]: I1128 07:28:23.117980 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-mmqbs_0811185a-c49e-4a81-b6d7-c786f590177b/kube-rbac-proxy/0.log" Nov 28 07:28:23 crc kubenswrapper[4955]: I1128 07:28:23.148761 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-mmqbs_0811185a-c49e-4a81-b6d7-c786f590177b/manager/0.log" Nov 28 07:28:23 crc kubenswrapper[4955]: I1128 07:28:23.393206 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:28:23 crc kubenswrapper[4955]: I1128 07:28:23.393258 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:28:43 crc kubenswrapper[4955]: I1128 07:28:43.636064 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gn2ld_c9a5dd12-fb17-4fab-b1f9-9a005cc2877a/control-plane-machine-set-operator/0.log" Nov 28 07:28:43 crc kubenswrapper[4955]: I1128 07:28:43.842245 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4c6dx_4b20c134-37f2-42c2-be5f-d6f4a86d7b10/kube-rbac-proxy/0.log" Nov 28 07:28:43 crc kubenswrapper[4955]: I1128 07:28:43.893272 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4c6dx_4b20c134-37f2-42c2-be5f-d6f4a86d7b10/machine-api-operator/0.log" Nov 28 07:28:53 crc kubenswrapper[4955]: I1128 07:28:53.392747 4955 
patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:28:53 crc kubenswrapper[4955]: I1128 07:28:53.393308 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:28:58 crc kubenswrapper[4955]: I1128 07:28:58.152645 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-rkw98_e9ab5ef6-2183-4170-87c6-5704f80d6073/cert-manager-controller/0.log" Nov 28 07:28:58 crc kubenswrapper[4955]: I1128 07:28:58.303039 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-4tlhd_e4d43868-a6d5-4a5f-8fd0-5a59b3fc47f2/cert-manager-cainjector/0.log" Nov 28 07:28:58 crc kubenswrapper[4955]: I1128 07:28:58.340962 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-p27v9_501578cc-adbd-424b-be8d-6bc4ea59655e/cert-manager-webhook/0.log" Nov 28 07:29:11 crc kubenswrapper[4955]: I1128 07:29:11.930778 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-6ksf2_bf9f276f-14a8-47e1-9eff-7faf202c0ec3/nmstate-console-plugin/0.log" Nov 28 07:29:12 crc kubenswrapper[4955]: I1128 07:29:12.080243 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ztr67_ab1b9bd2-d514-41c2-8315-b035a598caa9/nmstate-handler/0.log" Nov 28 07:29:12 crc kubenswrapper[4955]: I1128 07:29:12.182850 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sh662_68fae9ad-af93-49b7-a741-227d048c4ee4/kube-rbac-proxy/0.log" Nov 28 07:29:12 crc kubenswrapper[4955]: I1128 07:29:12.198325 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-sh662_68fae9ad-af93-49b7-a741-227d048c4ee4/nmstate-metrics/0.log" Nov 28 07:29:12 crc kubenswrapper[4955]: I1128 07:29:12.344139 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-8sdmt_c810a1d4-a881-4a83-b9cc-853762f772ee/nmstate-operator/0.log" Nov 28 07:29:12 crc kubenswrapper[4955]: I1128 07:29:12.396129 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-8zt87_8d206e9b-c6d4-4274-acf0-c404fd13eeaf/nmstate-webhook/0.log" Nov 28 07:29:23 crc kubenswrapper[4955]: I1128 07:29:23.393116 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:29:23 crc kubenswrapper[4955]: I1128 07:29:23.393798 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:29:23 crc kubenswrapper[4955]: I1128 07:29:23.393855 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" Nov 28 07:29:23 crc kubenswrapper[4955]: I1128 07:29:23.396129 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"8ee8210f6ade8bba459585a59eae91d5491f4c3ae83126bf725ba9f746531a30"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 07:29:23 crc kubenswrapper[4955]: I1128 07:29:23.396218 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://8ee8210f6ade8bba459585a59eae91d5491f4c3ae83126bf725ba9f746531a30" gracePeriod=600 Nov 28 07:29:23 crc kubenswrapper[4955]: I1128 07:29:23.546860 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="8ee8210f6ade8bba459585a59eae91d5491f4c3ae83126bf725ba9f746531a30" exitCode=0 Nov 28 07:29:23 crc kubenswrapper[4955]: I1128 07:29:23.546906 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"8ee8210f6ade8bba459585a59eae91d5491f4c3ae83126bf725ba9f746531a30"} Nov 28 07:29:23 crc kubenswrapper[4955]: I1128 07:29:23.546946 4955 scope.go:117] "RemoveContainer" containerID="65d666348582c8fe887dfdf0f86d643079c65930b4a96a06e605c0dcaba54c42" Nov 28 07:29:24 crc kubenswrapper[4955]: I1128 07:29:24.564108 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerStarted","Data":"fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"} Nov 28 07:29:28 crc kubenswrapper[4955]: I1128 07:29:28.503004 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-sjhcg_1f30adb5-d334-4ab0-9acc-8c83ca002efa/kube-rbac-proxy/0.log" 
Nov 28 07:29:28 crc kubenswrapper[4955]: I1128 07:29:28.659642 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-sjhcg_1f30adb5-d334-4ab0-9acc-8c83ca002efa/controller/0.log" Nov 28 07:29:28 crc kubenswrapper[4955]: I1128 07:29:28.715344 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-frr-files/0.log" Nov 28 07:29:28 crc kubenswrapper[4955]: I1128 07:29:28.903368 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-frr-files/0.log" Nov 28 07:29:28 crc kubenswrapper[4955]: I1128 07:29:28.911172 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-metrics/0.log" Nov 28 07:29:28 crc kubenswrapper[4955]: I1128 07:29:28.930517 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-reloader/0.log" Nov 28 07:29:28 crc kubenswrapper[4955]: I1128 07:29:28.940905 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-reloader/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.065612 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-frr-files/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.076172 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-reloader/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.097433 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-metrics/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.150183 4955 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-metrics/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.301987 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-frr-files/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.318090 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-metrics/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.321550 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/cp-reloader/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.360282 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/controller/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.534155 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/kube-rbac-proxy/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.546245 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/frr-metrics/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.564347 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/kube-rbac-proxy-frr/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.760962 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/reloader/0.log" Nov 28 07:29:29 crc kubenswrapper[4955]: I1128 07:29:29.780745 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-hgxx8_73d34f84-8626-4d9a-9f32-e5b041f75636/frr-k8s-webhook-server/0.log" Nov 28 07:29:30 crc kubenswrapper[4955]: I1128 07:29:30.038571 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7ffcc65867-np4gd_897e2a63-d58f-4bf7-b954-7614a0b8011b/manager/0.log" Nov 28 07:29:30 crc kubenswrapper[4955]: I1128 07:29:30.133806 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-674c4d7d9d-zw6p8_55e28775-8755-420c-9c5e-99506d84594e/webhook-server/0.log" Nov 28 07:29:30 crc kubenswrapper[4955]: I1128 07:29:30.242641 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cmlb7_ca7c2d77-7f33-4e39-8cc0-4ac415b9d430/kube-rbac-proxy/0.log" Nov 28 07:29:30 crc kubenswrapper[4955]: I1128 07:29:30.815671 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cmlb7_ca7c2d77-7f33-4e39-8cc0-4ac415b9d430/speaker/0.log" Nov 28 07:29:30 crc kubenswrapper[4955]: I1128 07:29:30.875718 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-m4zws_fe5b3b04-5092-4f4f-b2e4-9b4ede37f887/frr/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.203407 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/util/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.386429 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/pull/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.437140 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/util/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.437792 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/pull/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.611752 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/util/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.654945 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/pull/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.670148 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffwqsr_dcba2b58-4038-4b2a-879e-466b64878a49/extract/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.784032 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/util/0.log" Nov 28 07:29:44 crc kubenswrapper[4955]: I1128 07:29:44.997663 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/util/0.log" Nov 28 07:29:45 crc kubenswrapper[4955]: I1128 07:29:45.002708 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/pull/0.log" Nov 28 
07:29:45 crc kubenswrapper[4955]: I1128 07:29:45.026348 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/pull/0.log" Nov 28 07:29:45 crc kubenswrapper[4955]: I1128 07:29:45.137743 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/util/0.log" Nov 28 07:29:45 crc kubenswrapper[4955]: I1128 07:29:45.146240 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/pull/0.log" Nov 28 07:29:45 crc kubenswrapper[4955]: I1128 07:29:45.174609 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gs999_7a10e06e-4190-4e64-a8de-3470d1277a4c/extract/0.log" Nov 28 07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.178191 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-utilities/0.log" Nov 28 07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.357759 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-content/0.log" Nov 28 07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.371715 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-content/0.log" Nov 28 07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.387585 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-utilities/0.log" Nov 28 
07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.553551 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-content/0.log" Nov 28 07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.596146 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/extract-utilities/0.log" Nov 28 07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.783182 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-utilities/0.log" Nov 28 07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.960178 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c272h_1d951c94-8c04-495f-b294-92a4cb70cd63/registry-server/0.log" Nov 28 07:29:46 crc kubenswrapper[4955]: I1128 07:29:46.992957 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-content/0.log" Nov 28 07:29:47 crc kubenswrapper[4955]: I1128 07:29:47.040874 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-utilities/0.log" Nov 28 07:29:47 crc kubenswrapper[4955]: I1128 07:29:47.061160 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-content/0.log" Nov 28 07:29:47 crc kubenswrapper[4955]: I1128 07:29:47.180020 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-utilities/0.log" Nov 28 07:29:47 crc kubenswrapper[4955]: I1128 07:29:47.256497 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/extract-content/0.log" Nov 28 07:29:47 crc kubenswrapper[4955]: I1128 07:29:47.388781 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nsv62_91faab58-aa75-49f0-bf54-3de5fccd9ead/marketplace-operator/0.log" Nov 28 07:29:47 crc kubenswrapper[4955]: I1128 07:29:47.759638 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-utilities/0.log" Nov 28 07:29:47 crc kubenswrapper[4955]: I1128 07:29:47.872674 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tqdwm_e33445dd-1d02-47a1-bb19-42033b44eaa4/registry-server/0.log" Nov 28 07:29:47 crc kubenswrapper[4955]: I1128 07:29:47.958874 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-utilities/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.008078 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-content/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.009731 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-content/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.147354 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-utilities/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.186805 4955 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/extract-content/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.226394 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-utilities/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.229302 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mvv7h_f1490061-8d25-458d-825b-2006937f9b62/registry-server/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.400330 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-utilities/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.411434 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-content/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.419086 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-content/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.564323 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-utilities/0.log" Nov 28 07:29:48 crc kubenswrapper[4955]: I1128 07:29:48.564647 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/extract-content/0.log" Nov 28 07:29:49 crc kubenswrapper[4955]: I1128 07:29:49.104269 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5kxmn_c4041c69-e867-4601-977f-ffee8577f28c/registry-server/0.log" Nov 28 
07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.176633 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg"] Nov 28 07:30:00 crc kubenswrapper[4955]: E1128 07:30:00.185943 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c888b2c-b7b8-41f7-964c-0e64050d5299" containerName="container-00" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.185970 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c888b2c-b7b8-41f7-964c-0e64050d5299" containerName="container-00" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.186195 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c888b2c-b7b8-41f7-964c-0e64050d5299" containerName="container-00" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.186825 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.188602 4955 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.188772 4955 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.191859 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg"] Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.289553 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwmwp\" (UniqueName: \"kubernetes.io/projected/4c41c200-460c-4b48-bbb3-60d90619e8e8-kube-api-access-lwmwp\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.289619 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c41c200-460c-4b48-bbb3-60d90619e8e8-secret-volume\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.289701 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c41c200-460c-4b48-bbb3-60d90619e8e8-config-volume\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.391002 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwmwp\" (UniqueName: \"kubernetes.io/projected/4c41c200-460c-4b48-bbb3-60d90619e8e8-kube-api-access-lwmwp\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.391105 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c41c200-460c-4b48-bbb3-60d90619e8e8-secret-volume\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.391193 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/4c41c200-460c-4b48-bbb3-60d90619e8e8-config-volume\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.392182 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c41c200-460c-4b48-bbb3-60d90619e8e8-config-volume\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.397772 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c41c200-460c-4b48-bbb3-60d90619e8e8-secret-volume\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.410379 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwmwp\" (UniqueName: \"kubernetes.io/projected/4c41c200-460c-4b48-bbb3-60d90619e8e8-kube-api-access-lwmwp\") pod \"collect-profiles-29405250-mg9rg\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.520243 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:00 crc kubenswrapper[4955]: I1128 07:30:00.970758 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg"] Nov 28 07:30:01 crc kubenswrapper[4955]: I1128 07:30:01.913412 4955 generic.go:334] "Generic (PLEG): container finished" podID="4c41c200-460c-4b48-bbb3-60d90619e8e8" containerID="6f12079c1707f37fa49a34e2090f237acd5bc3bca4f8d7903c9560f9271e3dee" exitCode=0 Nov 28 07:30:01 crc kubenswrapper[4955]: I1128 07:30:01.913457 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" event={"ID":"4c41c200-460c-4b48-bbb3-60d90619e8e8","Type":"ContainerDied","Data":"6f12079c1707f37fa49a34e2090f237acd5bc3bca4f8d7903c9560f9271e3dee"} Nov 28 07:30:01 crc kubenswrapper[4955]: I1128 07:30:01.913485 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" event={"ID":"4c41c200-460c-4b48-bbb3-60d90619e8e8","Type":"ContainerStarted","Data":"67433cdf127ce3a3097fa597866d8eff2f600430097672086c0e814e901a76bd"} Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.283289 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.452067 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c41c200-460c-4b48-bbb3-60d90619e8e8-config-volume\") pod \"4c41c200-460c-4b48-bbb3-60d90619e8e8\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.452540 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c41c200-460c-4b48-bbb3-60d90619e8e8-secret-volume\") pod \"4c41c200-460c-4b48-bbb3-60d90619e8e8\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.452569 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwmwp\" (UniqueName: \"kubernetes.io/projected/4c41c200-460c-4b48-bbb3-60d90619e8e8-kube-api-access-lwmwp\") pod \"4c41c200-460c-4b48-bbb3-60d90619e8e8\" (UID: \"4c41c200-460c-4b48-bbb3-60d90619e8e8\") " Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.452827 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c41c200-460c-4b48-bbb3-60d90619e8e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c41c200-460c-4b48-bbb3-60d90619e8e8" (UID: "4c41c200-460c-4b48-bbb3-60d90619e8e8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.453114 4955 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c41c200-460c-4b48-bbb3-60d90619e8e8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.458801 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c41c200-460c-4b48-bbb3-60d90619e8e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c41c200-460c-4b48-bbb3-60d90619e8e8" (UID: "4c41c200-460c-4b48-bbb3-60d90619e8e8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.461657 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c41c200-460c-4b48-bbb3-60d90619e8e8-kube-api-access-lwmwp" (OuterVolumeSpecName: "kube-api-access-lwmwp") pod "4c41c200-460c-4b48-bbb3-60d90619e8e8" (UID: "4c41c200-460c-4b48-bbb3-60d90619e8e8"). InnerVolumeSpecName "kube-api-access-lwmwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.554820 4955 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c41c200-460c-4b48-bbb3-60d90619e8e8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.554857 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwmwp\" (UniqueName: \"kubernetes.io/projected/4c41c200-460c-4b48-bbb3-60d90619e8e8-kube-api-access-lwmwp\") on node \"crc\" DevicePath \"\"" Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.931234 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" event={"ID":"4c41c200-460c-4b48-bbb3-60d90619e8e8","Type":"ContainerDied","Data":"67433cdf127ce3a3097fa597866d8eff2f600430097672086c0e814e901a76bd"} Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.931292 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405250-mg9rg" Nov 28 07:30:03 crc kubenswrapper[4955]: I1128 07:30:03.931306 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67433cdf127ce3a3097fa597866d8eff2f600430097672086c0e814e901a76bd" Nov 28 07:30:04 crc kubenswrapper[4955]: I1128 07:30:04.357559 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr"] Nov 28 07:30:04 crc kubenswrapper[4955]: I1128 07:30:04.369324 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405205-vqqmr"] Nov 28 07:30:05 crc kubenswrapper[4955]: I1128 07:30:05.715814 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b805d70-2eba-4e7f-af58-4c60699cc49e" path="/var/lib/kubelet/pods/9b805d70-2eba-4e7f-af58-4c60699cc49e/volumes" Nov 28 07:30:21 crc kubenswrapper[4955]: E1128 07:30:21.922775 4955 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.97:51882->38.102.83.97:33103: write tcp 38.102.83.97:51882->38.102.83.97:33103: write: broken pipe Nov 28 07:30:28 crc kubenswrapper[4955]: I1128 07:30:28.880068 4955 scope.go:117] "RemoveContainer" containerID="96224d33337769c38bc4ecf81501d596cf98dd6bb644f7bf2fe541414c33b809" Nov 28 07:30:31 crc kubenswrapper[4955]: I1128 07:30:31.901802 4955 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5ce3cc8f-9d19-49fa-83a9-d71cf669d26c" containerName="galera" probeResult="failure" output="command timed out" Nov 28 07:31:23 crc kubenswrapper[4955]: I1128 07:31:23.392936 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 28 07:31:23 crc kubenswrapper[4955]: I1128 07:31:23.393960 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:31:25 crc kubenswrapper[4955]: I1128 07:31:25.855172 4955 generic.go:334] "Generic (PLEG): container finished" podID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerID="6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7" exitCode=0 Nov 28 07:31:25 crc kubenswrapper[4955]: I1128 07:31:25.855283 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" event={"ID":"2c0d2fd4-ec67-4a89-8549-3db777959f88","Type":"ContainerDied","Data":"6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7"} Nov 28 07:31:25 crc kubenswrapper[4955]: I1128 07:31:25.856847 4955 scope.go:117] "RemoveContainer" containerID="6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7" Nov 28 07:31:26 crc kubenswrapper[4955]: I1128 07:31:26.757236 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x6zl9_must-gather-rqb4p_2c0d2fd4-ec67-4a89-8549-3db777959f88/gather/0.log" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.414050 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mrbz2"] Nov 28 07:31:34 crc kubenswrapper[4955]: E1128 07:31:34.415022 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c41c200-460c-4b48-bbb3-60d90619e8e8" containerName="collect-profiles" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.415033 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c41c200-460c-4b48-bbb3-60d90619e8e8" containerName="collect-profiles" Nov 28 07:31:34 crc 
kubenswrapper[4955]: I1128 07:31:34.415248 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c41c200-460c-4b48-bbb3-60d90619e8e8" containerName="collect-profiles" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.416541 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.456951 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrbz2"] Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.534442 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqr47\" (UniqueName: \"kubernetes.io/projected/e66d3b32-3dcb-47e7-a241-df2fb025e60e-kube-api-access-zqr47\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.534562 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-utilities\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.534979 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-catalog-content\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.636697 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-catalog-content\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.636777 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqr47\" (UniqueName: \"kubernetes.io/projected/e66d3b32-3dcb-47e7-a241-df2fb025e60e-kube-api-access-zqr47\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.636804 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-utilities\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.637255 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-utilities\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.637481 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-catalog-content\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.658071 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqr47\" (UniqueName: 
\"kubernetes.io/projected/e66d3b32-3dcb-47e7-a241-df2fb025e60e-kube-api-access-zqr47\") pod \"redhat-operators-mrbz2\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:34 crc kubenswrapper[4955]: I1128 07:31:34.785033 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:35 crc kubenswrapper[4955]: I1128 07:31:35.233875 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mrbz2"] Nov 28 07:31:35 crc kubenswrapper[4955]: W1128 07:31:35.239626 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode66d3b32_3dcb_47e7_a241_df2fb025e60e.slice/crio-4ad51b7cba0805245b44f72d0411c8e969f67d15c3f03312e7c2827d715cb051 WatchSource:0}: Error finding container 4ad51b7cba0805245b44f72d0411c8e969f67d15c3f03312e7c2827d715cb051: Status 404 returned error can't find the container with id 4ad51b7cba0805245b44f72d0411c8e969f67d15c3f03312e7c2827d715cb051 Nov 28 07:31:35 crc kubenswrapper[4955]: I1128 07:31:35.721243 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-x6zl9/must-gather-rqb4p"] Nov 28 07:31:35 crc kubenswrapper[4955]: I1128 07:31:35.721937 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" podUID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerName="copy" containerID="cri-o://078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10" gracePeriod=2 Nov 28 07:31:35 crc kubenswrapper[4955]: I1128 07:31:35.730437 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-x6zl9/must-gather-rqb4p"] Nov 28 07:31:35 crc kubenswrapper[4955]: I1128 07:31:35.975150 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbz2" 
event={"ID":"e66d3b32-3dcb-47e7-a241-df2fb025e60e","Type":"ContainerStarted","Data":"4ad51b7cba0805245b44f72d0411c8e969f67d15c3f03312e7c2827d715cb051"} Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.267976 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x6zl9_must-gather-rqb4p_2c0d2fd4-ec67-4a89-8549-3db777959f88/copy/0.log" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.268930 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.282092 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c0d2fd4-ec67-4a89-8549-3db777959f88-must-gather-output\") pod \"2c0d2fd4-ec67-4a89-8549-3db777959f88\" (UID: \"2c0d2fd4-ec67-4a89-8549-3db777959f88\") " Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.384124 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjtcj\" (UniqueName: \"kubernetes.io/projected/2c0d2fd4-ec67-4a89-8549-3db777959f88-kube-api-access-xjtcj\") pod \"2c0d2fd4-ec67-4a89-8549-3db777959f88\" (UID: \"2c0d2fd4-ec67-4a89-8549-3db777959f88\") " Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.401916 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0d2fd4-ec67-4a89-8549-3db777959f88-kube-api-access-xjtcj" (OuterVolumeSpecName: "kube-api-access-xjtcj") pod "2c0d2fd4-ec67-4a89-8549-3db777959f88" (UID: "2c0d2fd4-ec67-4a89-8549-3db777959f88"). InnerVolumeSpecName "kube-api-access-xjtcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.424469 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0d2fd4-ec67-4a89-8549-3db777959f88-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2c0d2fd4-ec67-4a89-8549-3db777959f88" (UID: "2c0d2fd4-ec67-4a89-8549-3db777959f88"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.485999 4955 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2c0d2fd4-ec67-4a89-8549-3db777959f88-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.486039 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjtcj\" (UniqueName: \"kubernetes.io/projected/2c0d2fd4-ec67-4a89-8549-3db777959f88-kube-api-access-xjtcj\") on node \"crc\" DevicePath \"\"" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.987892 4955 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-x6zl9_must-gather-rqb4p_2c0d2fd4-ec67-4a89-8549-3db777959f88/copy/0.log" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.991926 4955 generic.go:334] "Generic (PLEG): container finished" podID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerID="078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10" exitCode=143 Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.992016 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-x6zl9/must-gather-rqb4p" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.992057 4955 scope.go:117] "RemoveContainer" containerID="078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10" Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.998040 4955 generic.go:334] "Generic (PLEG): container finished" podID="e66d3b32-3dcb-47e7-a241-df2fb025e60e" containerID="4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846" exitCode=0 Nov 28 07:31:36 crc kubenswrapper[4955]: I1128 07:31:36.998109 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbz2" event={"ID":"e66d3b32-3dcb-47e7-a241-df2fb025e60e","Type":"ContainerDied","Data":"4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846"} Nov 28 07:31:37 crc kubenswrapper[4955]: I1128 07:31:37.000083 4955 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 07:31:37 crc kubenswrapper[4955]: I1128 07:31:37.038278 4955 scope.go:117] "RemoveContainer" containerID="6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7" Nov 28 07:31:37 crc kubenswrapper[4955]: I1128 07:31:37.142684 4955 scope.go:117] "RemoveContainer" containerID="078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10" Nov 28 07:31:37 crc kubenswrapper[4955]: E1128 07:31:37.143871 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10\": container with ID starting with 078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10 not found: ID does not exist" containerID="078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10" Nov 28 07:31:37 crc kubenswrapper[4955]: I1128 07:31:37.143904 4955 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10"} err="failed to get container status \"078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10\": rpc error: code = NotFound desc = could not find container \"078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10\": container with ID starting with 078a8180e3af18130dcd6e0fdd8962b152e9342bbd694a215dd1e13bc1f38d10 not found: ID does not exist" Nov 28 07:31:37 crc kubenswrapper[4955]: I1128 07:31:37.143927 4955 scope.go:117] "RemoveContainer" containerID="6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7" Nov 28 07:31:37 crc kubenswrapper[4955]: E1128 07:31:37.145011 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7\": container with ID starting with 6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7 not found: ID does not exist" containerID="6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7" Nov 28 07:31:37 crc kubenswrapper[4955]: I1128 07:31:37.145035 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7"} err="failed to get container status \"6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7\": rpc error: code = NotFound desc = could not find container \"6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7\": container with ID starting with 6910027c0c029ed0d764c3b7c59c45f8a1f70518c8da145b519a15b9c3a0efc7 not found: ID does not exist" Nov 28 07:31:37 crc kubenswrapper[4955]: I1128 07:31:37.717841 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0d2fd4-ec67-4a89-8549-3db777959f88" path="/var/lib/kubelet/pods/2c0d2fd4-ec67-4a89-8549-3db777959f88/volumes" Nov 28 07:31:38 crc kubenswrapper[4955]: I1128 
07:31:38.010961 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbz2" event={"ID":"e66d3b32-3dcb-47e7-a241-df2fb025e60e","Type":"ContainerStarted","Data":"79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8"} Nov 28 07:31:39 crc kubenswrapper[4955]: I1128 07:31:39.022167 4955 generic.go:334] "Generic (PLEG): container finished" podID="e66d3b32-3dcb-47e7-a241-df2fb025e60e" containerID="79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8" exitCode=0 Nov 28 07:31:39 crc kubenswrapper[4955]: I1128 07:31:39.022263 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbz2" event={"ID":"e66d3b32-3dcb-47e7-a241-df2fb025e60e","Type":"ContainerDied","Data":"79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8"} Nov 28 07:31:40 crc kubenswrapper[4955]: I1128 07:31:40.032699 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbz2" event={"ID":"e66d3b32-3dcb-47e7-a241-df2fb025e60e","Type":"ContainerStarted","Data":"4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427"} Nov 28 07:31:40 crc kubenswrapper[4955]: I1128 07:31:40.055453 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mrbz2" podStartSLOduration=3.276851454 podStartE2EDuration="6.05543634s" podCreationTimestamp="2025-11-28 07:31:34 +0000 UTC" firstStartedPulling="2025-11-28 07:31:36.99978212 +0000 UTC m=+4219.589037690" lastFinishedPulling="2025-11-28 07:31:39.778367006 +0000 UTC m=+4222.367622576" observedRunningTime="2025-11-28 07:31:40.05159857 +0000 UTC m=+4222.640854140" watchObservedRunningTime="2025-11-28 07:31:40.05543634 +0000 UTC m=+4222.644691910" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.599172 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zs5dq"] Nov 28 07:31:42 crc 
kubenswrapper[4955]: E1128 07:31:42.601088 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerName="copy" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.601213 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerName="copy" Nov 28 07:31:42 crc kubenswrapper[4955]: E1128 07:31:42.601317 4955 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerName="gather" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.601398 4955 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerName="gather" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.601736 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerName="copy" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.601854 4955 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0d2fd4-ec67-4a89-8549-3db777959f88" containerName="gather" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.604904 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.629460 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zs5dq"] Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.697763 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxnz\" (UniqueName: \"kubernetes.io/projected/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-kube-api-access-ckxnz\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.698743 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-catalog-content\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.698897 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-utilities\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.800924 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-catalog-content\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.800989 4955 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-utilities\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.801542 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-catalog-content\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.801589 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-utilities\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.801763 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckxnz\" (UniqueName: \"kubernetes.io/projected/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-kube-api-access-ckxnz\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.821476 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckxnz\" (UniqueName: \"kubernetes.io/projected/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-kube-api-access-ckxnz\") pod \"community-operators-zs5dq\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:42 crc kubenswrapper[4955]: I1128 07:31:42.940735 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.749203 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zs5dq"] Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.786176 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.786445 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.802487 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-srprm"] Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.804429 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.822811 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srprm"] Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.942004 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbnxf\" (UniqueName: \"kubernetes.io/projected/2b27fa67-5a58-42d9-96bd-94d5308b9d07-kube-api-access-dbnxf\") pod \"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.942062 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-utilities\") pod \"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 
07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.942188 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-catalog-content\") pod \"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:44 crc kubenswrapper[4955]: I1128 07:31:44.999805 4955 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sww9k"] Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.001701 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.014525 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sww9k"] Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.044730 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-catalog-content\") pod \"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.044832 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbnxf\" (UniqueName: \"kubernetes.io/projected/2b27fa67-5a58-42d9-96bd-94d5308b9d07-kube-api-access-dbnxf\") pod \"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.044863 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-utilities\") pod 
\"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.045338 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-utilities\") pod \"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.045560 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-catalog-content\") pod \"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.065546 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbnxf\" (UniqueName: \"kubernetes.io/projected/2b27fa67-5a58-42d9-96bd-94d5308b9d07-kube-api-access-dbnxf\") pod \"redhat-marketplace-srprm\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") " pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.078544 4955 generic.go:334] "Generic (PLEG): container finished" podID="5e58b9f5-3095-4beb-9ce0-a59dbea3cce5" containerID="15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2" exitCode=0 Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.078588 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5dq" event={"ID":"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5","Type":"ContainerDied","Data":"15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2"} Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.078613 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zs5dq" event={"ID":"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5","Type":"ContainerStarted","Data":"37f74261cc608588e6caced7828a4a2cdc5561b38dfdf0ff36297b23f4899552"} Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.146078 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr6r6\" (UniqueName: \"kubernetes.io/projected/08f8a07d-ce19-4ec8-b9df-efa392804058-kube-api-access-zr6r6\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.146190 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-utilities\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.146231 4955 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-catalog-content\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.189007 4955 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.247904 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-utilities\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.247956 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-catalog-content\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.248124 4955 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr6r6\" (UniqueName: \"kubernetes.io/projected/08f8a07d-ce19-4ec8-b9df-efa392804058-kube-api-access-zr6r6\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.248553 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-utilities\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.248713 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-catalog-content\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " 
pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.267341 4955 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr6r6\" (UniqueName: \"kubernetes.io/projected/08f8a07d-ce19-4ec8-b9df-efa392804058-kube-api-access-zr6r6\") pod \"certified-operators-sww9k\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") " pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.432689 4955 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.760240 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-srprm"] Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.841095 4955 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mrbz2" podUID="e66d3b32-3dcb-47e7-a241-df2fb025e60e" containerName="registry-server" probeResult="failure" output=< Nov 28 07:31:45 crc kubenswrapper[4955]: timeout: failed to connect service ":50051" within 1s Nov 28 07:31:45 crc kubenswrapper[4955]: > Nov 28 07:31:45 crc kubenswrapper[4955]: I1128 07:31:45.960328 4955 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sww9k"] Nov 28 07:31:46 crc kubenswrapper[4955]: W1128 07:31:46.308603 4955 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08f8a07d_ce19_4ec8_b9df_efa392804058.slice/crio-12fdf4f9f2edaa8747f277c0f955d72f9004bbecba3799b6efe38e4fb22cf551 WatchSource:0}: Error finding container 12fdf4f9f2edaa8747f277c0f955d72f9004bbecba3799b6efe38e4fb22cf551: Status 404 returned error can't find the container with id 12fdf4f9f2edaa8747f277c0f955d72f9004bbecba3799b6efe38e4fb22cf551 Nov 28 07:31:47 crc kubenswrapper[4955]: I1128 
07:31:47.101262 4955 generic.go:334] "Generic (PLEG): container finished" podID="5e58b9f5-3095-4beb-9ce0-a59dbea3cce5" containerID="c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975" exitCode=0 Nov 28 07:31:47 crc kubenswrapper[4955]: I1128 07:31:47.101322 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5dq" event={"ID":"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5","Type":"ContainerDied","Data":"c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975"} Nov 28 07:31:47 crc kubenswrapper[4955]: I1128 07:31:47.104297 4955 generic.go:334] "Generic (PLEG): container finished" podID="08f8a07d-ce19-4ec8-b9df-efa392804058" containerID="bf289fad17c9d19e9cf5ed37d01965b0042b655c52e1e217571ae9844c8b3361" exitCode=0 Nov 28 07:31:47 crc kubenswrapper[4955]: I1128 07:31:47.104372 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sww9k" event={"ID":"08f8a07d-ce19-4ec8-b9df-efa392804058","Type":"ContainerDied","Data":"bf289fad17c9d19e9cf5ed37d01965b0042b655c52e1e217571ae9844c8b3361"} Nov 28 07:31:47 crc kubenswrapper[4955]: I1128 07:31:47.104419 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sww9k" event={"ID":"08f8a07d-ce19-4ec8-b9df-efa392804058","Type":"ContainerStarted","Data":"12fdf4f9f2edaa8747f277c0f955d72f9004bbecba3799b6efe38e4fb22cf551"} Nov 28 07:31:47 crc kubenswrapper[4955]: I1128 07:31:47.107777 4955 generic.go:334] "Generic (PLEG): container finished" podID="2b27fa67-5a58-42d9-96bd-94d5308b9d07" containerID="b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07" exitCode=0 Nov 28 07:31:47 crc kubenswrapper[4955]: I1128 07:31:47.107809 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srprm" 
event={"ID":"2b27fa67-5a58-42d9-96bd-94d5308b9d07","Type":"ContainerDied","Data":"b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07"} Nov 28 07:31:47 crc kubenswrapper[4955]: I1128 07:31:47.107829 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srprm" event={"ID":"2b27fa67-5a58-42d9-96bd-94d5308b9d07","Type":"ContainerStarted","Data":"081382d30bf16f81686b4179168ddd2e2a870ed4ca4a4bd75d50b227e8b49b1a"} Nov 28 07:31:48 crc kubenswrapper[4955]: I1128 07:31:48.121426 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5dq" event={"ID":"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5","Type":"ContainerStarted","Data":"bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d"} Nov 28 07:31:48 crc kubenswrapper[4955]: I1128 07:31:48.163682 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zs5dq" podStartSLOduration=3.703383397 podStartE2EDuration="6.163666897s" podCreationTimestamp="2025-11-28 07:31:42 +0000 UTC" firstStartedPulling="2025-11-28 07:31:45.08060202 +0000 UTC m=+4227.669857580" lastFinishedPulling="2025-11-28 07:31:47.5408855 +0000 UTC m=+4230.130141080" observedRunningTime="2025-11-28 07:31:48.161459293 +0000 UTC m=+4230.750714873" watchObservedRunningTime="2025-11-28 07:31:48.163666897 +0000 UTC m=+4230.752922467" Nov 28 07:31:49 crc kubenswrapper[4955]: I1128 07:31:49.129997 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sww9k" event={"ID":"08f8a07d-ce19-4ec8-b9df-efa392804058","Type":"ContainerStarted","Data":"8d362881b4611159e40010db4db5e2b995e7485f470afe4e50307efe105d5aa5"} Nov 28 07:31:49 crc kubenswrapper[4955]: I1128 07:31:49.132212 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srprm" 
event={"ID":"2b27fa67-5a58-42d9-96bd-94d5308b9d07","Type":"ContainerStarted","Data":"58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5"} Nov 28 07:31:50 crc kubenswrapper[4955]: I1128 07:31:50.142844 4955 generic.go:334] "Generic (PLEG): container finished" podID="2b27fa67-5a58-42d9-96bd-94d5308b9d07" containerID="58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5" exitCode=0 Nov 28 07:31:50 crc kubenswrapper[4955]: I1128 07:31:50.142924 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srprm" event={"ID":"2b27fa67-5a58-42d9-96bd-94d5308b9d07","Type":"ContainerDied","Data":"58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5"} Nov 28 07:31:50 crc kubenswrapper[4955]: I1128 07:31:50.146234 4955 generic.go:334] "Generic (PLEG): container finished" podID="08f8a07d-ce19-4ec8-b9df-efa392804058" containerID="8d362881b4611159e40010db4db5e2b995e7485f470afe4e50307efe105d5aa5" exitCode=0 Nov 28 07:31:50 crc kubenswrapper[4955]: I1128 07:31:50.146587 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sww9k" event={"ID":"08f8a07d-ce19-4ec8-b9df-efa392804058","Type":"ContainerDied","Data":"8d362881b4611159e40010db4db5e2b995e7485f470afe4e50307efe105d5aa5"} Nov 28 07:31:51 crc kubenswrapper[4955]: I1128 07:31:51.162595 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sww9k" event={"ID":"08f8a07d-ce19-4ec8-b9df-efa392804058","Type":"ContainerStarted","Data":"19c31f62e73b374d12446285d040a1243ca76dc72696f8d925c6ab21c8184b3e"} Nov 28 07:31:51 crc kubenswrapper[4955]: I1128 07:31:51.166817 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srprm" event={"ID":"2b27fa67-5a58-42d9-96bd-94d5308b9d07","Type":"ContainerStarted","Data":"41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7"} Nov 28 07:31:51 crc kubenswrapper[4955]: I1128 
07:31:51.201291 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sww9k" podStartSLOduration=3.561587873 podStartE2EDuration="7.201264569s" podCreationTimestamp="2025-11-28 07:31:44 +0000 UTC" firstStartedPulling="2025-11-28 07:31:47.106132845 +0000 UTC m=+4229.695388425" lastFinishedPulling="2025-11-28 07:31:50.745809551 +0000 UTC m=+4233.335065121" observedRunningTime="2025-11-28 07:31:51.18699319 +0000 UTC m=+4233.776248840" watchObservedRunningTime="2025-11-28 07:31:51.201264569 +0000 UTC m=+4233.790520169" Nov 28 07:31:51 crc kubenswrapper[4955]: I1128 07:31:51.207468 4955 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-srprm" podStartSLOduration=3.651886202 podStartE2EDuration="7.207438046s" podCreationTimestamp="2025-11-28 07:31:44 +0000 UTC" firstStartedPulling="2025-11-28 07:31:47.109763559 +0000 UTC m=+4229.699019169" lastFinishedPulling="2025-11-28 07:31:50.665315413 +0000 UTC m=+4233.254571013" observedRunningTime="2025-11-28 07:31:51.206055737 +0000 UTC m=+4233.795311347" watchObservedRunningTime="2025-11-28 07:31:51.207438046 +0000 UTC m=+4233.796693636" Nov 28 07:31:52 crc kubenswrapper[4955]: I1128 07:31:52.941943 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:52 crc kubenswrapper[4955]: I1128 07:31:52.942562 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:53 crc kubenswrapper[4955]: I1128 07:31:53.032626 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:53 crc kubenswrapper[4955]: I1128 07:31:53.304594 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:53 
crc kubenswrapper[4955]: I1128 07:31:53.392607 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 07:31:53 crc kubenswrapper[4955]: I1128 07:31:53.392673 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 07:31:54 crc kubenswrapper[4955]: I1128 07:31:54.876888 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:54 crc kubenswrapper[4955]: I1128 07:31:54.956002 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:55 crc kubenswrapper[4955]: I1128 07:31:55.189624 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:55 crc kubenswrapper[4955]: I1128 07:31:55.189790 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:55 crc kubenswrapper[4955]: I1128 07:31:55.246782 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:55 crc kubenswrapper[4955]: I1128 07:31:55.435064 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:55 crc kubenswrapper[4955]: I1128 07:31:55.435102 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:55 crc kubenswrapper[4955]: I1128 07:31:55.484302 4955 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:56 crc kubenswrapper[4955]: I1128 07:31:56.950089 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sww9k" Nov 28 07:31:56 crc kubenswrapper[4955]: I1128 07:31:56.976831 4955 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-srprm" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.193044 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zs5dq"] Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.193320 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zs5dq" podUID="5e58b9f5-3095-4beb-9ce0-a59dbea3cce5" containerName="registry-server" containerID="cri-o://bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d" gracePeriod=2 Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.391994 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrbz2"] Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.392620 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mrbz2" podUID="e66d3b32-3dcb-47e7-a241-df2fb025e60e" containerName="registry-server" containerID="cri-o://4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427" gracePeriod=2 Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.674413 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.707565 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-utilities\") pod \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.707632 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-catalog-content\") pod \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.707812 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckxnz\" (UniqueName: \"kubernetes.io/projected/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-kube-api-access-ckxnz\") pod \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\" (UID: \"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5\") " Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.708930 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-utilities" (OuterVolumeSpecName: "utilities") pod "5e58b9f5-3095-4beb-9ce0-a59dbea3cce5" (UID: "5e58b9f5-3095-4beb-9ce0-a59dbea3cce5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.737796 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-kube-api-access-ckxnz" (OuterVolumeSpecName: "kube-api-access-ckxnz") pod "5e58b9f5-3095-4beb-9ce0-a59dbea3cce5" (UID: "5e58b9f5-3095-4beb-9ce0-a59dbea3cce5"). InnerVolumeSpecName "kube-api-access-ckxnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.772069 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.794216 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e58b9f5-3095-4beb-9ce0-a59dbea3cce5" (UID: "5e58b9f5-3095-4beb-9ce0-a59dbea3cce5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.810118 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-utilities\") pod \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.810178 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqr47\" (UniqueName: \"kubernetes.io/projected/e66d3b32-3dcb-47e7-a241-df2fb025e60e-kube-api-access-zqr47\") pod \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.810407 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-catalog-content\") pod \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\" (UID: \"e66d3b32-3dcb-47e7-a241-df2fb025e60e\") " Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.811210 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-utilities\") on node 
\"crc\" DevicePath \"\"" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.811232 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.811247 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckxnz\" (UniqueName: \"kubernetes.io/projected/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5-kube-api-access-ckxnz\") on node \"crc\" DevicePath \"\"" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.813037 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-utilities" (OuterVolumeSpecName: "utilities") pod "e66d3b32-3dcb-47e7-a241-df2fb025e60e" (UID: "e66d3b32-3dcb-47e7-a241-df2fb025e60e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.819009 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66d3b32-3dcb-47e7-a241-df2fb025e60e-kube-api-access-zqr47" (OuterVolumeSpecName: "kube-api-access-zqr47") pod "e66d3b32-3dcb-47e7-a241-df2fb025e60e" (UID: "e66d3b32-3dcb-47e7-a241-df2fb025e60e"). InnerVolumeSpecName "kube-api-access-zqr47". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.913101 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.913134 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqr47\" (UniqueName: \"kubernetes.io/projected/e66d3b32-3dcb-47e7-a241-df2fb025e60e-kube-api-access-zqr47\") on node \"crc\" DevicePath \"\"" Nov 28 07:31:57 crc kubenswrapper[4955]: I1128 07:31:57.918422 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e66d3b32-3dcb-47e7-a241-df2fb025e60e" (UID: "e66d3b32-3dcb-47e7-a241-df2fb025e60e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.014765 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e66d3b32-3dcb-47e7-a241-df2fb025e60e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.239774 4955 generic.go:334] "Generic (PLEG): container finished" podID="e66d3b32-3dcb-47e7-a241-df2fb025e60e" containerID="4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427" exitCode=0 Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.239838 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mrbz2" Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.239846 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbz2" event={"ID":"e66d3b32-3dcb-47e7-a241-df2fb025e60e","Type":"ContainerDied","Data":"4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427"} Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.239870 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mrbz2" event={"ID":"e66d3b32-3dcb-47e7-a241-df2fb025e60e","Type":"ContainerDied","Data":"4ad51b7cba0805245b44f72d0411c8e969f67d15c3f03312e7c2827d715cb051"} Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.239887 4955 scope.go:117] "RemoveContainer" containerID="4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427" Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.243189 4955 generic.go:334] "Generic (PLEG): container finished" podID="5e58b9f5-3095-4beb-9ce0-a59dbea3cce5" containerID="bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d" exitCode=0 Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.243250 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5dq" event={"ID":"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5","Type":"ContainerDied","Data":"bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d"} Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.243271 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zs5dq" event={"ID":"5e58b9f5-3095-4beb-9ce0-a59dbea3cce5","Type":"ContainerDied","Data":"37f74261cc608588e6caced7828a4a2cdc5561b38dfdf0ff36297b23f4899552"} Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.243303 4955 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zs5dq" Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.277973 4955 scope.go:117] "RemoveContainer" containerID="79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8" Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.284592 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mrbz2"] Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.295673 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mrbz2"] Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.305281 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zs5dq"] Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.314760 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zs5dq"] Nov 28 07:31:58 crc kubenswrapper[4955]: I1128 07:31:58.830733 4955 scope.go:117] "RemoveContainer" containerID="4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.011351 4955 scope.go:117] "RemoveContainer" containerID="4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427" Nov 28 07:31:59 crc kubenswrapper[4955]: E1128 07:31:59.011883 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427\": container with ID starting with 4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427 not found: ID does not exist" containerID="4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.011937 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427"} 
err="failed to get container status \"4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427\": rpc error: code = NotFound desc = could not find container \"4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427\": container with ID starting with 4e0c802ed233dc059d8fff9ca22b6753a89985b1ae0370e54dc3f8c4842bc427 not found: ID does not exist" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.011969 4955 scope.go:117] "RemoveContainer" containerID="79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8" Nov 28 07:31:59 crc kubenswrapper[4955]: E1128 07:31:59.012284 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8\": container with ID starting with 79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8 not found: ID does not exist" containerID="79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.012328 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8"} err="failed to get container status \"79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8\": rpc error: code = NotFound desc = could not find container \"79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8\": container with ID starting with 79348739b2b9712c314e4b8aa4cae64d7bb641c9cf9edcf8dd455068d014f5e8 not found: ID does not exist" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.012363 4955 scope.go:117] "RemoveContainer" containerID="4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846" Nov 28 07:31:59 crc kubenswrapper[4955]: E1128 07:31:59.012734 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846\": container with ID starting with 4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846 not found: ID does not exist" containerID="4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.012763 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846"} err="failed to get container status \"4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846\": rpc error: code = NotFound desc = could not find container \"4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846\": container with ID starting with 4608e948442db1aba55f0c2010260094ae2a0115c7d50865d09ba4a9cd562846 not found: ID does not exist" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.012781 4955 scope.go:117] "RemoveContainer" containerID="bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.042720 4955 scope.go:117] "RemoveContainer" containerID="c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.090157 4955 scope.go:117] "RemoveContainer" containerID="15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.143156 4955 scope.go:117] "RemoveContainer" containerID="bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d" Nov 28 07:31:59 crc kubenswrapper[4955]: E1128 07:31:59.143693 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d\": container with ID starting with bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d not found: ID does not exist" 
containerID="bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.143844 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d"} err="failed to get container status \"bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d\": rpc error: code = NotFound desc = could not find container \"bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d\": container with ID starting with bab9bf664bb13e15f7d585d0da2ea2d6e3d3c6b9e57fc91da45e3cd458ded02d not found: ID does not exist" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.143946 4955 scope.go:117] "RemoveContainer" containerID="c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975" Nov 28 07:31:59 crc kubenswrapper[4955]: E1128 07:31:59.144548 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975\": container with ID starting with c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975 not found: ID does not exist" containerID="c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.144589 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975"} err="failed to get container status \"c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975\": rpc error: code = NotFound desc = could not find container \"c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975\": container with ID starting with c306eb1293b87a8b5d02ec778b74daccbc3dc3a08e0dc4fd693b9ed973a5f975 not found: ID does not exist" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.144615 4955 scope.go:117] 
"RemoveContainer" containerID="15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2" Nov 28 07:31:59 crc kubenswrapper[4955]: E1128 07:31:59.144882 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2\": container with ID starting with 15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2 not found: ID does not exist" containerID="15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.144906 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2"} err="failed to get container status \"15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2\": rpc error: code = NotFound desc = could not find container \"15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2\": container with ID starting with 15b2621a2b2aff66d378885d4e4c1ac3acb97f9cebf8b9115136c8842d6fdfd2 not found: ID does not exist" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.599223 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srprm"] Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.599615 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-srprm" podUID="2b27fa67-5a58-42d9-96bd-94d5308b9d07" containerName="registry-server" containerID="cri-o://41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7" gracePeriod=2 Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.727306 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e58b9f5-3095-4beb-9ce0-a59dbea3cce5" path="/var/lib/kubelet/pods/5e58b9f5-3095-4beb-9ce0-a59dbea3cce5/volumes" Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 
07:31:59.728140 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e66d3b32-3dcb-47e7-a241-df2fb025e60e" path="/var/lib/kubelet/pods/e66d3b32-3dcb-47e7-a241-df2fb025e60e/volumes"
Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.800490 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sww9k"]
Nov 28 07:31:59 crc kubenswrapper[4955]: I1128 07:31:59.800826 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sww9k" podUID="08f8a07d-ce19-4ec8-b9df-efa392804058" containerName="registry-server" containerID="cri-o://19c31f62e73b374d12446285d040a1243ca76dc72696f8d925c6ab21c8184b3e" gracePeriod=2
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.130465 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srprm"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.261006 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbnxf\" (UniqueName: \"kubernetes.io/projected/2b27fa67-5a58-42d9-96bd-94d5308b9d07-kube-api-access-dbnxf\") pod \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") "
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.261209 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-utilities\") pod \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") "
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.261255 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-catalog-content\") pod \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\" (UID: \"2b27fa67-5a58-42d9-96bd-94d5308b9d07\") "
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.262114 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-utilities" (OuterVolumeSpecName: "utilities") pod "2b27fa67-5a58-42d9-96bd-94d5308b9d07" (UID: "2b27fa67-5a58-42d9-96bd-94d5308b9d07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.262444 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.268263 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b27fa67-5a58-42d9-96bd-94d5308b9d07-kube-api-access-dbnxf" (OuterVolumeSpecName: "kube-api-access-dbnxf") pod "2b27fa67-5a58-42d9-96bd-94d5308b9d07" (UID: "2b27fa67-5a58-42d9-96bd-94d5308b9d07"). InnerVolumeSpecName "kube-api-access-dbnxf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.278716 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b27fa67-5a58-42d9-96bd-94d5308b9d07" (UID: "2b27fa67-5a58-42d9-96bd-94d5308b9d07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.283303 4955 generic.go:334] "Generic (PLEG): container finished" podID="2b27fa67-5a58-42d9-96bd-94d5308b9d07" containerID="41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7" exitCode=0
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.283375 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srprm" event={"ID":"2b27fa67-5a58-42d9-96bd-94d5308b9d07","Type":"ContainerDied","Data":"41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7"}
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.283393 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-srprm"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.283401 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-srprm" event={"ID":"2b27fa67-5a58-42d9-96bd-94d5308b9d07","Type":"ContainerDied","Data":"081382d30bf16f81686b4179168ddd2e2a870ed4ca4a4bd75d50b227e8b49b1a"}
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.283426 4955 scope.go:117] "RemoveContainer" containerID="41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.288166 4955 generic.go:334] "Generic (PLEG): container finished" podID="08f8a07d-ce19-4ec8-b9df-efa392804058" containerID="19c31f62e73b374d12446285d040a1243ca76dc72696f8d925c6ab21c8184b3e" exitCode=0
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.288208 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sww9k" event={"ID":"08f8a07d-ce19-4ec8-b9df-efa392804058","Type":"ContainerDied","Data":"19c31f62e73b374d12446285d040a1243ca76dc72696f8d925c6ab21c8184b3e"}
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.288238 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sww9k" event={"ID":"08f8a07d-ce19-4ec8-b9df-efa392804058","Type":"ContainerDied","Data":"12fdf4f9f2edaa8747f277c0f955d72f9004bbecba3799b6efe38e4fb22cf551"}
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.288254 4955 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12fdf4f9f2edaa8747f277c0f955d72f9004bbecba3799b6efe38e4fb22cf551"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.335591 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sww9k"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.353390 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-srprm"]
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.354542 4955 scope.go:117] "RemoveContainer" containerID="58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.363163 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-srprm"]
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.364992 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b27fa67-5a58-42d9-96bd-94d5308b9d07-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.365102 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbnxf\" (UniqueName: \"kubernetes.io/projected/2b27fa67-5a58-42d9-96bd-94d5308b9d07-kube-api-access-dbnxf\") on node \"crc\" DevicePath \"\""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.387754 4955 scope.go:117] "RemoveContainer" containerID="b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.412715 4955 scope.go:117] "RemoveContainer" containerID="41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7"
Nov 28 07:32:00 crc kubenswrapper[4955]: E1128 07:32:00.413370 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7\": container with ID starting with 41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7 not found: ID does not exist" containerID="41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.413411 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7"} err="failed to get container status \"41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7\": rpc error: code = NotFound desc = could not find container \"41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7\": container with ID starting with 41ba123aea22c66d276fb7496ff421173d2a7004bbb51fae33f6e2458d17a8c7 not found: ID does not exist"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.413438 4955 scope.go:117] "RemoveContainer" containerID="58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5"
Nov 28 07:32:00 crc kubenswrapper[4955]: E1128 07:32:00.413771 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5\": container with ID starting with 58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5 not found: ID does not exist" containerID="58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.413849 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5"} err="failed to get container status \"58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5\": rpc error: code = NotFound desc = could not find container \"58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5\": container with ID starting with 58a2da43f5b69661c6d607c1229904f3fc6044c7a2add915d8e1e1e74aa739e5 not found: ID does not exist"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.413916 4955 scope.go:117] "RemoveContainer" containerID="b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07"
Nov 28 07:32:00 crc kubenswrapper[4955]: E1128 07:32:00.414265 4955 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07\": container with ID starting with b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07 not found: ID does not exist" containerID="b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.414341 4955 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07"} err="failed to get container status \"b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07\": rpc error: code = NotFound desc = could not find container \"b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07\": container with ID starting with b13bd328267928014db24c357ecf9ffb6ae381b726e2e7743d0522d488d08f07 not found: ID does not exist"
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.466285 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-utilities\") pod \"08f8a07d-ce19-4ec8-b9df-efa392804058\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") "
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.466611 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr6r6\" (UniqueName: \"kubernetes.io/projected/08f8a07d-ce19-4ec8-b9df-efa392804058-kube-api-access-zr6r6\") pod \"08f8a07d-ce19-4ec8-b9df-efa392804058\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") "
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.466856 4955 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-catalog-content\") pod \"08f8a07d-ce19-4ec8-b9df-efa392804058\" (UID: \"08f8a07d-ce19-4ec8-b9df-efa392804058\") "
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.466890 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-utilities" (OuterVolumeSpecName: "utilities") pod "08f8a07d-ce19-4ec8-b9df-efa392804058" (UID: "08f8a07d-ce19-4ec8-b9df-efa392804058"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.472246 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f8a07d-ce19-4ec8-b9df-efa392804058-kube-api-access-zr6r6" (OuterVolumeSpecName: "kube-api-access-zr6r6") pod "08f8a07d-ce19-4ec8-b9df-efa392804058" (UID: "08f8a07d-ce19-4ec8-b9df-efa392804058"). InnerVolumeSpecName "kube-api-access-zr6r6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.536259 4955 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08f8a07d-ce19-4ec8-b9df-efa392804058" (UID: "08f8a07d-ce19-4ec8-b9df-efa392804058"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.569463 4955 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.569594 4955 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8a07d-ce19-4ec8-b9df-efa392804058-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 07:32:00 crc kubenswrapper[4955]: I1128 07:32:00.569619 4955 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr6r6\" (UniqueName: \"kubernetes.io/projected/08f8a07d-ce19-4ec8-b9df-efa392804058-kube-api-access-zr6r6\") on node \"crc\" DevicePath \"\""
Nov 28 07:32:01 crc kubenswrapper[4955]: I1128 07:32:01.300796 4955 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sww9k"
Nov 28 07:32:01 crc kubenswrapper[4955]: I1128 07:32:01.349821 4955 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sww9k"]
Nov 28 07:32:01 crc kubenswrapper[4955]: I1128 07:32:01.362209 4955 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sww9k"]
Nov 28 07:32:01 crc kubenswrapper[4955]: I1128 07:32:01.722159 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f8a07d-ce19-4ec8-b9df-efa392804058" path="/var/lib/kubelet/pods/08f8a07d-ce19-4ec8-b9df-efa392804058/volumes"
Nov 28 07:32:01 crc kubenswrapper[4955]: I1128 07:32:01.723471 4955 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b27fa67-5a58-42d9-96bd-94d5308b9d07" path="/var/lib/kubelet/pods/2b27fa67-5a58-42d9-96bd-94d5308b9d07/volumes"
Nov 28 07:32:23 crc kubenswrapper[4955]: I1128 07:32:23.392707 4955 patch_prober.go:28] interesting pod/machine-config-daemon-lmmht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 07:32:23 crc kubenswrapper[4955]: I1128 07:32:23.393277 4955 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 07:32:23 crc kubenswrapper[4955]: I1128 07:32:23.393327 4955 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lmmht"
Nov 28 07:32:23 crc kubenswrapper[4955]: I1128 07:32:23.394115 4955 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"} pod="openshift-machine-config-operator/machine-config-daemon-lmmht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 07:32:23 crc kubenswrapper[4955]: I1128 07:32:23.394178 4955 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerName="machine-config-daemon" containerID="cri-o://fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1" gracePeriod=600
Nov 28 07:32:23 crc kubenswrapper[4955]: I1128 07:32:23.555052 4955 generic.go:334] "Generic (PLEG): container finished" podID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1" exitCode=0
Nov 28 07:32:23 crc kubenswrapper[4955]: I1128 07:32:23.555102 4955 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" event={"ID":"ad229ad8-9ea1-483d-a615-3f7d2ab408bc","Type":"ContainerDied","Data":"fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"}
Nov 28 07:32:23 crc kubenswrapper[4955]: I1128 07:32:23.555144 4955 scope.go:117] "RemoveContainer" containerID="8ee8210f6ade8bba459585a59eae91d5491f4c3ae83126bf725ba9f746531a30"
Nov 28 07:32:23 crc kubenswrapper[4955]: E1128 07:32:23.609380 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:32:24 crc kubenswrapper[4955]: I1128 07:32:24.572145 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:32:24 crc kubenswrapper[4955]: E1128 07:32:24.572991 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:32:39 crc kubenswrapper[4955]: I1128 07:32:39.706568 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:32:39 crc kubenswrapper[4955]: E1128 07:32:39.707528 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:32:54 crc kubenswrapper[4955]: I1128 07:32:54.705598 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:32:54 crc kubenswrapper[4955]: E1128 07:32:54.706960 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:33:05 crc kubenswrapper[4955]: I1128 07:33:05.704294 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:33:05 crc kubenswrapper[4955]: E1128 07:33:05.705165 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:33:18 crc kubenswrapper[4955]: I1128 07:33:18.704874 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:33:18 crc kubenswrapper[4955]: E1128 07:33:18.705640 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:33:29 crc kubenswrapper[4955]: I1128 07:33:29.697647 4955 scope.go:117] "RemoveContainer" containerID="8ee057a4aff182c1d856e9b3d209d19173f7ea47686e99a77f7008a2faa9c0a3"
Nov 28 07:33:33 crc kubenswrapper[4955]: I1128 07:33:33.704957 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:33:33 crc kubenswrapper[4955]: E1128 07:33:33.706060 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:33:44 crc kubenswrapper[4955]: I1128 07:33:44.704126 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:33:44 crc kubenswrapper[4955]: E1128 07:33:44.704921 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:33:55 crc kubenswrapper[4955]: I1128 07:33:55.704820 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:33:55 crc kubenswrapper[4955]: E1128 07:33:55.707335 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:34:06 crc kubenswrapper[4955]: I1128 07:34:06.703915 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:34:06 crc kubenswrapper[4955]: E1128 07:34:06.704726 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:34:21 crc kubenswrapper[4955]: I1128 07:34:21.705575 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:34:21 crc kubenswrapper[4955]: E1128 07:34:21.706624 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:34:35 crc kubenswrapper[4955]: I1128 07:34:35.705645 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:34:35 crc kubenswrapper[4955]: E1128 07:34:35.708390 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:34:47 crc kubenswrapper[4955]: I1128 07:34:47.720707 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:34:47 crc kubenswrapper[4955]: E1128 07:34:47.721799 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"
Nov 28 07:35:02 crc kubenswrapper[4955]: I1128 07:35:02.705263 4955 scope.go:117] "RemoveContainer" containerID="fb07320d8486e70ff6ff10fbf0ccab9f6f397392674510fad263fd8acc44c6a1"
Nov 28 07:35:02 crc kubenswrapper[4955]: E1128 07:35:02.706627 4955 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lmmht_openshift-machine-config-operator(ad229ad8-9ea1-483d-a615-3f7d2ab408bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-lmmht" podUID="ad229ad8-9ea1-483d-a615-3f7d2ab408bc"